
Thursday, October 17, 2024

Visual Builder for Redwood HCM, Sample Redwood Adoption Plan and other tips!

As most of us know by now, Redwood for HCM keeps growing in capabilities, with more features delivered each quarter. To that end, Oracle is hosting several highly informative virtual events and webinars taking place in the coming days and weeks.

Below I have listed the ones that are of most interest to me, including a lot of content on Visual Builder Studio in Redwood, which is now how rules and personalizations are done in HCM, replacing tools like Page Composer. For those used to personalizing HCM from a more functional perspective, Visual Builder is a bit more technical in nature, whether you use the Express or Advanced mode, primarily because of how version control, publishing and deployments are done.

The VB Studio Express sessions below are a great way to get familiar with version control, branching, CI/CD pipelines, and Visual Builder in Redwood, which is a must-have skill going forward. These are concepts that perhaps you haven't been exposed to as an HCM techno-functional expert, but they are well known to those who have been extending HCM or ERP with Visual Builder in OCI for a while now.

Besides the upcoming sessions, I have also added links to documentation that you want to bookmark and keep a close eye on, and more.

Upcoming Webinars from Oracle

HCM – Getting Started with Redwood (October 2024)

https://community.oracle.com/customerconnect/events/605920-hcm-getting-started-with-redwood-october-2024

HCM – Redwood Time Card and Layout Sets

https://community.oracle.com/customerconnect/events/605918-hcm-redwood-time-card-and-layout-sets

HCM – Personalizing HCM and SCM Cloud Applications Using VB Studio Express: Fundamentals of Git and Merge Requests

https://community.oracle.com/customerconnect/events/605938-hcm-personalizing-hcm-and-scm-cloud-applications-using-vb-studio-express-fundamentals-of-git-and-merge-requests

HCM – Personalizing HCM and SCM Cloud Applications Using VB Studio Express: Branching

https://community.oracle.com/customerconnect/events/605939-hcm-personalizing-hcm-and-scm-cloud-applications-using-vb-studio-express-branching

HCM – Personalizing HCM and SCM Cloud Applications Using VB Studio Express: Handling Common Issues, Tips and Tricks


HCM – Ready for Redwood: Questions and Answers

https://community.oracle.com/customerconnect/events/605957-hcm-ready-for-redwood-questions-and-answers

Links to useful documentation

Redwood for HCM Adoption Plan from Oracle: https://community.oracle.com/customerconnect/discussion/779200/redwood-for-hcm-adoption-plan#latest

Redwood for HCM FAQs: https://community.oracle.com/customerconnect/discussion/745489/redwood-for-hcm-faqs#latest

The above adoption plan has really good information and links, particularly starting from page 19. There are also various useful links in the slides of the adoption plan, including attachments with information about VB Studio, and much more. I encourage everyone to review these, especially if you are struggling to come up with a plan of attack for implementing Redwood.

Customer Connect Pages to Bookmark and review periodically

HCM Resource Center — Cloud Customer Connect

Oracle AI for Fusion Applications — Cloud Customer Connect

Visual Builder Studio for HCM — Cloud Customer Connect

HCM Redwood Personalization Helper Tool — Cloud Customer Connect

Saturday, October 5, 2024

Oracle Fusion Cloud (ERP & HCM) - Integration & Extension Strategy Reference Architecture

In today’s interconnected enterprise landscape, businesses using Oracle Fusion Cloud (FA) require seamless integration and extension capabilities to optimize operations and drive innovation. This reference architecture provides a possible blueprint for extending and integrating various systems using Oracle’s powerful suite of cloud services and tools like Oracle Integration Cloud (OIC), Oracle Analytics, ADW, ATP and more. The architecture ensures scalable, real-time data processing and business logic orchestration, enhancing overall enterprise functionality.

There are two diagrams below. The first serves as a logical architecture showing two aspects: the left side of the image presents a Data Analytics & Integration focused view, while the right side provides a glimpse into possible ways to extend the Fusion application, and Application Integration & Extension capabilities in general.

The second diagram represents mostly the Data Analytics & Integration content in a sequence diagram format, providing a different view into the data interactions across systems and tools.

Note: click on the images to expand them for ease of readability

Logical Architecture



Sequence Diagram



Discussion


The Data Analytics & Integration layer encompasses several tools designed to extract, transform, and load (ETL) data across systems.

Near-Real Time Capabilities


Oracle Integration Cloud (OIC) processes HCM Atom Feeds and ERP Business Events, facilitating the integration between various Oracle Cloud modules and external systems in a near real-time capacity. Notice that OIC can read data from the different Atom Feeds available and either store it as files in Object Storage or stream it to Kafka (or OCI Streaming). On the other hand, you can configure OIC to listen for event messages from ERP, catch and handle them, and then make a subsequent API call, store them in a data mart or Object Storage, publish them to a Kafka topic for consumers to process, etc.

There are various advantages and caveats when using the Atom Feeds or Business Events, but the key point is that all the data attributes you seek may not be available in them, so they are often not the final solution to your real-time data needs. They are certainly a strong option to explore, and likely to meet a lot of your needs, and when combined with Kafka via the OIC adapter, you avoid the responsibility of delivering data to individual targets and consumers: you just own delivering the data to the various Kafka topics, and consumers subscribe and handle the complexity from there.
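To make the pattern concrete, here is a minimal Python sketch of the same idea outside of OIC: poll an HCM Atom feed and forward each entry to a Kafka topic. The feed URL, credentials, broker and topic name are hypothetical placeholders; in practice, OIC's HCM and Kafka adapters handle the polling, deduplication and retries for you.

import requests
import xml.etree.ElementTree as ET
from kafka import KafkaProducer  # kafka-python

ATOM_FEED_URL = "https://<fa-host>/hcmRestApi/atomservlet/employee/newhire"  # hypothetical feed URL
KAFKA_TOPIC = "hcm.newhire.events"                                           # hypothetical topic

producer = KafkaProducer(bootstrap_servers=["broker1:9092"])

def poll_feed_once():
    # Pull the feed with basic auth (an integration user) and parse the Atom XML
    response = requests.get(ATOM_FEED_URL, auth=("integration_user", "<password>"))
    response.raise_for_status()
    root = ET.fromstring(response.content)
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    for entry in root.findall("atom:entry", ns):
        # Each entry's content element holds the employee payload; forward it as-is
        content = entry.find("atom:content", ns)
        if content is not None and content.text:
            producer.send(KAFKA_TOPIC, content.text.encode("utf-8"))
    producer.flush()

if __name__ == "__main__":
    poll_feed_once()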

Bulk Extraction Capabilities


BICC and BIP/HCM Extracts deliver data into Oracle Object Storage for bulk data extraction needs, where the Autonomous Data Warehouse (ADW) DataFlow feature transforms it before loading it into the ADW for analytics. You can also use other tools like OCI DI or ODI to ingest the files from Object Storage into the ADW, but I would certainly try to make do with the DataFlow feature in the ADW, since it is free and powerful, allowing you to reduce the number of tools used and lower the total cost of ownership.
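If you prefer to script the load rather than use the ADW's built-in tooling, the same ingestion can be driven with DBMS_CLOUD directly against the database. Below is a minimal sketch using the python-oracledb driver; the credential name, bucket URI and target table are hypothetical placeholders, and the sketch assumes the credential was already created with DBMS_CLOUD.CREATE_CREDENTIAL.

import oracledb

# Connect to the ADW (assumes a configured wallet / TLS connect string)
connection = oracledb.connect(user="ADMIN", password="<password>", dsn="myadw_high")

copy_call = """
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'HCM_EXTRACT_STG',   -- hypothetical staging table
    credential_name => 'OBJ_STORE_CRED',    -- hypothetical credential
    file_uri_list   => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/<namespace>/b/<bucket>/o/hcm_extract*.csv',
    format          => json_object('type' value 'csv', 'skipheaders' value '1')
  );
END;
"""

with connection.cursor() as cursor:
    # Loads all matching CSV files from Object Storage into the staging table
    cursor.execute(copy_call)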

The architecture also features GoldenGate for real-time data replication from your ADW to other databases you may have on-premises or elsewhere, and Kafka clusters for streaming data between systems such as the ADW and on-premises data marts using native Kafka adapters, ensuring continuous data flow for analytics and decision-making once the data has been delivered and curated in your ADW.
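As a rough illustration of the consumer side of that streaming flow, the sketch below reads curated records from a Kafka topic and lands them in an on-premises MySQL data mart. The topic, table, columns and connection details are hypothetical, and a Kafka Connect JDBC sink would be the more hands-off alternative.

import json
from kafka import KafkaConsumer  # kafka-python
import mysql.connector           # mysql-connector-python

consumer = KafkaConsumer(
    "adw.curated.employees",                 # hypothetical topic
    bootstrap_servers=["broker1:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

db = mysql.connector.connect(host="datamart-host", user="etl_user",
                             password="<password>", database="hr_mart")
cursor = db.cursor()

for message in consumer:
    record = message.value
    # Upsert each curated record into the on-prem data mart (hypothetical table/columns)
    cursor.execute(
        "REPLACE INTO employee_dim (employee_id, full_name, department) VALUES (%s, %s, %s)",
        (record["employee_id"], record["full_name"], record["department"]),
    )
    db.commit()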

Something to note is that streaming directly from your Fusion (FA) environments to an ADW or elsewhere is not yet available, although Oracle has recently announced that their FDI platform will likely introduce this capability over time. When that becomes a reality, it would be a potential replacement for BICC and BIP/HCM Extracts, assuming it meets all the needs. This is important because there are constraints on how often you can extract data with BICC (frequency wise) and what kind of data you can get to, which is why you will likely also end up using BIP and HCM Extracts to bulk extract data not easily available via BICC and the PVOs (public view objects).

As you noticed above, the focus was not on extending FA but on extracting data from it, so the next section deals with extensions and application integration capabilities.

Extension and Application Integration Capabilities


The VBCS & APEX Tenant and OIC Business Logic Layer outlines the interaction between low-code development platforms (APEX and Visual Builder Cloud Service) and business logic hosted in an Oracle ATP (the ADW's cousin, tuned for transactional needs).

Here, your ATP is the tenant database for VBCS, rather than its very small embedded version, giving you more storage and horsepower, plus the ability to query your VBCS BO data and customize the backend. VBCS also connects to the ATP via Oracle REST Data Services (ORDS) to interact with custom PL/SQL that has been exposed as a REST service, for use cases like a custom error handling layer in the ATP that all your VBCS solutions can log errors and warnings to. Additionally, we are leveraging ORDS for high-volume API services, where external systems can directly call the ATP via PL/SQL you have exposed as REST, essentially using your ATP as an API gateway without needing another middleman like Polybase or C#, which typically adds unnecessary overhead, failure points and complexity. It is worth noting that you can also proxy your ORDS services through an API gateway (such as the OCI API Gateway or Apigee) instead, if you really feel you need to, for example because you want to monetize the traffic, or for several other reasons.
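As a small illustration of the shared error-handling-layer idea, the sketch below calls a hypothetical ORDS endpoint that wraps a PL/SQL logging procedure in the ATP. The URL, payload shape and authentication are placeholders; the PL/SQL side would be REST-enabled through ORDS separately.

import requests

ORDS_ERROR_ENDPOINT = "https://<atp-host>/ords/errlog/api/v1/errors"  # hypothetical ORDS module

def log_error(source_app: str, message: str, severity: str = "ERROR") -> None:
    # POST a structured error record to the shared logging layer in the ATP
    payload = {"source_app": source_app, "message": message, "severity": severity}
    response = requests.post(
        ORDS_ERROR_ENDPOINT,
        json=payload,
        auth=("errlog_client", "<password>"),  # basic auth as a placeholder; OAuth is also common with ORDS
        timeout=10,
    )
    response.raise_for_status()

if __name__ == "__main__":
    log_error("VBCS Expense App", "Failed to submit expense report for employee 123")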

OIC can also take advantage of the ATP by interacting with VBCS data through PL/SQL via ORDS, rather than through the Business Object API layer in VBCS, which can get really complex (and slow) depending on what you are trying to do; it can be much more beneficial to access the VBCS database objects locally and just call a wrapper via ORDS/REST. Lastly, OIC can use ORDS in the ATP to offload complex business logic and simply receive the results to continue processing, rather than doing that complex and heavy logic in OIC, which can take longer from a performance perspective and be harder to support (think long, complex orchestrations with many actions in OIC, versus a stored procedure you can easily read and tune, with Generative AI doing the leg work, while you use OIC with its adapters to do I/O with FA natively using the finalized artifacts).

Finally, you can take advantage of features included with the ATP, like APEX and Oracle Machine Learning Studio, to have conversations with your data and build compelling dashboards, reports and web solutions (the ADW also includes all of these benefits).


Conclusion


This reference integration and extension architecture illustrates how Oracle Fusion Cloud can be expanded to support dynamic enterprise needs. With tools like OIC, ADW, Kafka, and GoldenGate, organizations can automate business logic, integrate disparate data sources, and streamline their analytics processes. By leveraging these components, businesses can unlock greater agility, scalability, and data-driven decision-making capabilities, ensuring they stay competitive in a rapidly evolving digital world. Additionally, you can offload non-transactional reporting and data workloads from FA entirely, drastically improving the performance of the application by freeing up resources for transactional activity and real-time reporting, among many other benefits both already discussed and otherwise implied.

Friday, October 4, 2024

Oracle Application Express (APEX) - Generative AI Capabilities

Oracle Application Express (APEX) continues to revolutionize the way developers create web applications with its new Generative AI assistant. This cutting-edge feature integrates artificial intelligence into APEX’s low-code environment, allowing developers to accelerate the development process like never before. With the APEX Generative AI assistant, users can now generate SQL queries, PL/SQL and JavaScript code, and even create applications simply by describing their requirements in plain language. This means less time spent writing code and more time focused on refining application logic and design.

By bridging the gap between natural language and complex code generation, the AI assistant significantly reduces the learning curve for new developers while enhancing the efficiency of experienced ones. As Oracle APEX continues to evolve, the inclusion of AI-powered features sets a new standard for rapid application development, providing a powerful toolset that enhances productivity and creativity across all skill levels.

This introduction of generative AI in APEX showcases Oracle’s commitment to integrating advanced technologies that make development more accessible, efficient, and intuitive. Whether you're a seasoned developer or just beginning your journey with APEX, the Generative AI assistant opens up new possibilities for creating robust, data-rich applications faster than ever before.

Let us take a look at how to set up the Generative AI feature and some of the use cases!

Note: click the images below to expand them for ease of use and readability! 

Setup


In order to take advantage of these features, we need to allow APEX to interact with an LLM via the API layer. To do that, navigate to the "Workspace Utilities" area and then select the "Generative AI" option.



Once we are there, we can create a trust between our APEX instance and an LLM. In the screenshot below we see the trust set up with OpenAI, using an API key that I have access to, and I also show the options available to you besides OpenAI, which are Cohere and Oracle's OCI Generative AI Service (which I recommend especially if you already use Oracle Cloud Infrastructure, to take advantage of all the security you already have in place in your Virtual Cloud Network!).



APEX Assistant - SQL Workshop


Now for the exciting part! Within the SQL Workshop, you will see an option to click on the "APEX Assistant" which will open a panel on the right side, allowing you to access the query builder feature!


Above, we see the assistant joining two tables for us. We simply asked whether two tables in the current schema could be joined, and it provided the SQL statement to do just that! This can be very useful when you need help writing complex SQL for a data lineage you may not be very familiar with, and it is also something you cannot easily accomplish in another tool like ChatGPT, because the APEX Assistant has direct access to the metadata in your database, making things easier!

Besides the query builder feature, we have the general assistance mode, where we can ask it to develop code for us, modify and optimize code, etc. In this example, we ask it to write a PL/SQL stored procedure:


Notice how the "insert" feature will drop the code provided into your development canvas automatically!

In this next example, we switched the language to JavaScript and asked a slightly more complex question:


Create Applications using Gen AI


Besides the AI Assistant in the SQL Workshop, there's another very useful feature, this time in the App Builder!

Here another option is introduced: using a conversational agent to generate apps, in addition to existing options like creating an app from a file!



This is a powerful feature to get you started with your application, and I am excited to see how it evolves in the future, allowing for more customization and increased usability, but it is definitely a step in the right direction!

Conclusion


As we saw, there are plenty of new Generative AI features to be excited about in APEX, and I am very excited to see how they evolve over time, but they are certainly already powerful, particularly the Assistant in the SQL Workshop. If you already use APEX, there is no reason not to jump on these very cool features!

Tuesday, October 1, 2024

UML Diagrams For Productivity and Clarity

Following up on the last two entries regarding using Python to interact with Oracle MySQL and Oracle Linux, I want to introduce PlantUML, a fantastic tool for creating visual aids to help your development journey. We will create a simple UML diagram for the Python UI discussed in the last entry for monitoring various services, and we will also take a look at a couple of class diagrams for Kafka consumer and producer services.

Before getting to the diagram and the UML code, let's talk about how to run PlantUML locally on your machine using VS Code.

VS Code

Install VS Code; it's free.

https://code.visualstudio.com/download

PlantUML Extension


In VS Code, go to the Extensions Marketplace and install the PlantUML extension.

You also need a Java JRE on your machine; install it from: https://www.java.com/en/download/manual.jsp


Diagram Previews

To use it, create a new text file in VS Code and select PlantUML as the language. Then paste in the UML code and hit "Alt + D" to open the preview. From the preview you can copy the images.




Now, let's look at some examples, including one following up on our Python Monitoring UI referenced in the last blog entry.

UI Diagram


@startuml

skinparam componentStyle rectangle
skinparam rectangle {
  BackgroundColor<<main>> LightGray
  BackgroundColor<<option>> White
  BorderColor<<main>> Black
  BorderColor<<option>> Gray
}

rectangle "Python UI - Homepage" <<main>> {
    rectangle "Kafka Status" <<option>> 
    rectangle "Zookeeper Status" <<option>> 
    rectangle "MySQL Status" <<option>> 
    rectangle "Map View" <<option>> 
}

@enduml



Class Diagram - Consumer

Diagram depicting a consumer service in a Kafka architecture for a vehicle telemetry system.

@startuml

class KafkaConsumer {
    +consumeMessage()
    +parseJSON(String message)
    +processData(SchoolBusData busData)
}

class SchoolBusData {
    +happenedAtTime: String
    +assetId: String
    +latitude: float
    +longitude: float
    +headingDegrees: int
    +accuracyMeters: float
    +gpsSpeedMetersPerSecond: float
    +ecuSpeedMetersPerSecond: float
}

class DatabaseConnector {
    +insertSchoolBusData(SchoolBusData busData)
}

class MySQLDatabase {
    +save()
}

KafkaConsumer --> SchoolBusData : Parses
KafkaConsumer --> DatabaseConnector : Inserts into
DatabaseConnector --> MySQLDatabase : Stores

@enduml


Class Diagram - Producer

@startuml

class KafkaProducer {
    +produceMessage()
    +serializeToJSON(SchoolBusData busData)
    +sendToKafka(String jsonMessage)
}

class SchoolBusData {
    +happenedAtTime: String
    +assetId: String
    +latitude: float
    +longitude: float
    +headingDegrees: int
    +accuracyMeters: float
    +gpsSpeedMetersPerSecond: float
    +ecuSpeedMetersPerSecond: float
    +generateSampleData(): SchoolBusData
}

class KafkaTopic {
    +receiveMessage(String jsonMessage)
}

KafkaProducer --> SchoolBusData : Generates
KafkaProducer --> KafkaTopic : Sends message

@enduml


These are a couple of the types of diagrams you can create using PlantUML in VS Code. In another entry, we will cover a couple of different diagrams, including a sequence diagram, for a proposed architecture for getting data out of Oracle Fusion Cloud.

Oracle Linux and MySQL, Kafka and Python Flask Monitoring API

In this entry we will dig deeper into the previous post that dealt with invoking an Oracle MySQL stored procedure using Python. The focus of this entry is creating a Python API to monitor Kafka and MySQL running on an Oracle Linux VM. This API can be invoked from a User Interface that will allow the user to check the statuses of these different components.

To create a Python API that will execute the commands on your Oracle Linux or RHEL system to check the status of MySQL, Zookeeper, and Kafka, you can use the subprocess module in Python to run shell commands. 

Below is an example of how you can implement this.

Step-by-Step Implementation:

  • Create Python Functions to Check Status: Each function will execute the corresponding system command using subprocess.run and return the output.
  • Set Up Flask API: We'll use Flask to create a simple API that the UI can call to retrieve the status.

Python Code:

import subprocess
from flask import Flask, jsonify

app = Flask(__name__)

# Function to check MySQL status
def check_mysql_status():
    try:
        result = subprocess.run(['sudo', 'systemctl', 'status', 'mysqld'],
                                capture_output=True, text=True, check=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        # systemctl returns a non-zero exit code when the service is stopped/failed
        return e.stdout or str(e)

# Function to check Zookeeper status
def check_zookeeper_status():
    try:
        result = subprocess.run(['sudo', 'systemctl', 'status', 'zookeeper'],
                                capture_output=True, text=True, check=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        return e.stdout or str(e)

# Function to check Kafka status
def check_kafka_status():
    try:
        result = subprocess.run(['sudo', 'systemctl', 'status', 'kafka'],
                                capture_output=True, text=True, check=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        return e.stdout or str(e)

# Flask API route to get MySQL status
@app.route('/status/mysql', methods=['GET'])
def get_mysql_status():
    status = check_mysql_status()
    return jsonify({'service': 'MySQL', 'status': status})

# Flask API route to get Zookeeper status
@app.route('/status/zookeeper', methods=['GET'])
def get_zookeeper_status():
    status = check_zookeeper_status()
    return jsonify({'service': 'Zookeeper', 'status': status})

# Flask API route to get Kafka status
@app.route('/status/kafka', methods=['GET'])
def get_kafka_status():
    status = check_kafka_status()
    return jsonify({'service': 'Kafka', 'status': status})

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)


Explanation:

  • subprocess.run: Executes the systemctl commands to check the status of MySQL, Zookeeper, and Kafka. The capture_output=True argument captures the output, text=True ensures the output is returned as a string, and check=True raises CalledProcessError when a service is stopped or failed, in which case the captured output is still returned.
  • Flask: Provides an API endpoint for each service, which the UI can call to check the respective statuses.
  • Routes: Each API route (/status/mysql, /status/zookeeper, /status/kafka) responds to a GET request and returns the status of the requested service in JSON format.


Running the API:

To run the Flask API, ensure Flask is installed:

pip install Flask

To start the application:

python your_script_name.py


Creating the UI:

For the UI, you can use any front-end technology (HTML, React, etc.) and have buttons that call these API endpoints to display the status of each service.

For example:
  • A button for MySQL could call /status/mysql.
  • A button for Kafka could call /status/kafka.
  • A button for Zookeeper could call /status/zookeeper.
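
Before building the UI, you can also smoke-test the endpoints from Python (assuming the API is running locally on port 5000 as configured above):

import requests

for service in ('mysql', 'zookeeper', 'kafka'):
    # Call each status endpoint and print the JSON payload returned by Flask
    response = requests.get(f'http://localhost:5000/status/{service}')
    print(response.json())
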
Note on Permissions:

Ensure that the user running the Python script has the appropriate permissions to run the systemctl commands using sudo. You may need to modify the sudoers file to allow passwordless sudo for these commands.
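
For example, a sudoers entry along these lines (with a hypothetical service account named flaskmon) would allow only the specific status commands without a password:

flaskmon ALL=(ALL) NOPASSWD: /usr/bin/systemctl status mysqld, /usr/bin/systemctl status zookeeper, /usr/bin/systemctl status kafka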