
Thursday, October 17, 2024

Visual Builder for Redwood HCM, Sample Redwood Adoption Plan and other tips!

As most of us know by now, Redwood for HCM keeps growing in capabilities and more features are being delivered each quarter. To that end, Oracle is hosting several virtual events/webinars that are highly informational and taking place in the coming days and weeks. 

Below I have listed the ones that are of most interest to me, including a lot of content for Visual Builder Studio in Redwood, which is now how rules and personalizations are done in HCM, replacing tools like Page Composer. For those used to personalizing HCM from a more functional perspective, Visual Builder is a bit more technical in nature, whether you use the Express or Advanced mode, primarily because of how version control, publishing and deployments are done.

The VB Studio Express sessions below are a great way to get familiar with version control, branching, CI/CD pipelines, and Visual Builder in Redwood, which is a must-have skill going forward. These are concepts you may not have been exposed to as an HCM techno-functional expert, but they are well known to those who have been extending HCM or ERP with Visual Builder in OCI for a while now.

Besides the upcoming sessions, I have also added links to documentation that you want to bookmark and keep a close eye on, and more.

Upcoming Webinars from Oracle

HCM – Getting Started with Redwood (October 2024)

https://community.oracle.com/customerconnect/events/605920-hcm-getting-started-with-redwood-october-2024

HCM – Redwood Time Card and Layout Sets

Registration link: https://community.oracle.com/customerconnect/events/605918-hcm-redwood-time-card-and-layout-sets

HCM – Personalizing HCM and SCM Cloud Applications Using VB Studio Express: Fundamentals of Git and Merge Requests

Link: https://community.oracle.com/customerconnect/events/605938-hcm-personalizing-hcm-and-scm-cloud-applications-using-vb-studio-express-fundamentals-of-git-and-merge-requests

HCM – Personalizing HCM and SCM Cloud Applications Using VB Studio Express: Branching

https://community.oracle.com/customerconnect/events/605939-hcm-personalizing-hcm-and-scm-cloud-applications-using-vb-studio-express-branching

HCM – Personalizing HCM and SCM Cloud Applications Using VB Studio Express: Handling Common Issues, Tips and Tricks


HCM – Ready for Redwood: Questions and Answers

https://community.oracle.com/customerconnect/events/605957-hcm-ready-for-redwood-questions-and-answers

Links to useful documentation

Redwood for HCM Adoption Plan from Oracle: https://community.oracle.com/customerconnect/discussion/779200/redwood-for-hcm-adoption-plan#latest

Redwood for HCM FAQs: https://community.oracle.com/customerconnect/discussion/745489/redwood-for-hcm-faqs#latest

The above adoption plan has really good information and links, particularly starting from page 19. There are also various useful links in the slides of the adoption plan, including attachments with information about VB Studio, and much more. I encourage everyone to review these, especially if you are struggling to come up with a plan of attack for implementing Redwood.

Customer Connect Pages to Bookmark and review periodically

HCM Resource Center: HCM Resource Center — Cloud Customer Connect (oracle.com)

Oracle AI for Fusion Applications: Oracle AI for Fusion Applications — Cloud Customer Connect

Visual Builder Studio for HCM: Visual Builder Studio for HCM — Cloud Customer Connect (oracle.com)

HCM Redwood Personalization Helper Tool: HCM Redwood Personalization Helper Tool — Cloud Customer Connect (oracle.com)

Saturday, October 5, 2024

Oracle Fusion Cloud (ERP & HCM) - Integration & Extension Strategy Reference Architecture

In today’s interconnected enterprise landscape, businesses using Oracle Fusion Cloud (FA) require seamless integration and extension capabilities to optimize operations and drive innovation. This reference architecture provides a possible blueprint for extending and integrating various systems using Oracle’s powerful suite of cloud services and tools like Oracle Integration Cloud (OIC), Oracle Analytics, ADW, ATP and more. The architecture ensures scalable, real-time data processing and business logic orchestration, enhancing overall enterprise functionality.

There are two diagrams below. The first serves as a logical architecture showing two aspects: the left side of the image presents a Data Analytics & Integration focused view, while the right side provides a glimpse into possible ways to extend the Fusion application, and Application Integration & Extension capabilities in general.

The second diagram represents mostly the Data Analytics & Integration content in a sequence diagram format, for a different view into the data interactions across systems and tools.

Note: click on the images to expand them for ease of readability

Logical Architecture



Sequence Diagram



Discussion


The Data Analytics & Integration layer encompasses several tools designed to extract, transform, and load (ETL) data across systems.

Near-Real Time Capabilities


Oracle Integration Cloud (OIC) processes HCM Atom Feeds and ERP Business Events, facilitating near real-time integration between various Oracle Cloud modules and external systems. Notice that OIC can read data from the different Atom Feeds available and either store it as files in Object Storage or stream it to Kafka (or OCI Streaming). On the other hand, you can configure OIC to listen to business event messages from ERP, catch and handle them, and then make a subsequent API call, store them in a data mart or Object Storage, publish them to a Kafka topic for consumers to process, and so on.

There are various advantages and caveats to the Atom Feeds and Business Events, but the key point is that they may not carry all the data attributes you seek, so they are often not the final solution for your real-time data needs. They are still a strong option to explore and will likely meet many of your needs. When combined with Kafka, using the OIC adapter, you can avoid the responsibility of delivering data to individual targets and consumers: you only own delivering the data to the various Kafka topics, and consumers subscribe and handle the complexity from there.
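
OIC's Atom Feed and Kafka adapters handle this wiring for you, but to make the flow concrete, here is a minimal Python sketch of the same pattern outside of OIC: poll an Atom feed and publish each entry to a Kafka topic. The host, feed path, topic name and credentials are placeholders, and the kafka-python client is assumed.

# Minimal sketch (not OIC itself): poll an HCM Atom feed and publish each entry
# to a Kafka topic. Host, feed path, topic and credentials are placeholders,
# and kafka-python is assumed (pip install kafka-python requests).
import requests
import xml.etree.ElementTree as ET
from kafka import KafkaProducer

ATOM = '{http://www.w3.org/2005/Atom}'
FEED_URL = 'https://your-fa-host/hcmRestApi/atomservlet/employee/newhire'  # hypothetical feed

producer = KafkaProducer(bootstrap_servers='your-kafka-host:9092')

resp = requests.get(FEED_URL, auth=('integration.user', 'password'), timeout=30)
resp.raise_for_status()

root = ET.fromstring(resp.content)
for entry in root.findall(f'{ATOM}entry'):
    entry_id = entry.findtext(f'{ATOM}id') or ''
    # Forward the raw Atom entry; consumers parse and enrich it downstream
    producer.send('hcm.newhire.events', key=entry_id.encode(), value=ET.tostring(entry))

producer.flush()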

Bulk Extraction Capabilities


BICC and BIP/HCM Extracts commit data to Oracle Object Storage for bulk data extraction needs, where the Autonomous Data Warehouse (ADW) DataFlow feature transforms it before loading it into the ADW for analytics. You can also use other tools like OCI DI or ODI to ingest the files from Object Storage into the ADW, but I would certainly try to make do with the DataFlow feature in the ADW, since it is free and powerful, reducing the number of tools used and lowering the cost of ownership.
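
As a small illustration of the landing zone, here is a hedged Python sketch (using the OCI SDK, assuming a configured ~/.oci/config) that lists the extract files sitting in a hypothetical Object Storage bucket before the ADW picks them up; the bucket name and prefix are made up for the example.

# Minimal sketch: list the BICC/BIP extract files that have landed in an
# OCI Object Storage bucket before ADW (or OCI DI/ODI) picks them up.
# Bucket name and prefix are made up; assumes a configured ~/.oci/config
# and the OCI Python SDK (pip install oci).
import oci

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)

namespace = client.get_namespace().data
bucket = 'fa-extract-landing'  # hypothetical bucket name

listing = client.list_objects(namespace, bucket, prefix='bicc/', fields='name,size')
for obj in listing.data.objects:
    print(obj.name, obj.size)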

The architecture also features GoldenGate for real-time data replication from your ADW to other databases you may have on-premises or elsewhere, and Kafka clusters for streaming data between systems such as the ADW and on-premises data marts using native Kafka adapters, ensuring continuous data flow for analytics and decision-making once the data has been delivered and curated in your ADW.

Something to note is that streaming directly from your Fusion (FA) environments to an ADW or elsewhere is not yet available, although Oracle has recently announced that their FDI platform will likely introduce this capability over time. When that becomes a reality, it would be a potential replacement for BICC and BIP/HCM Extracts, assuming it meets all the needs. This matters because there are constraints on how often you can extract data with BICC (frequency wise) and what kind of data you can get to, which is why you will likely also end up using BIP and HCM Extracts to bulk extract data not easily available via BICC and the PVOs (public view objects).

As you noticed above, the focus was not on extending FA but on extracting data from it, so the next section deals with extensions and application integration capabilities.

Extension and Application Integration Capabilities


The VBCS & APEX Tenant and OIC Business Logic Layer outlines the interaction between low-code development platforms (APEX and Visual Builder Cloud Service) and business logic hosted in an Oracle ATP (the ADW's cousin, tuned for transactional needs).

Here, your ATP is the tenant database for VBCS, rather than its very small embedded version, giving you more storage, horsepower, and the ability to query your VBCS business object (BO) data and customize the backend. VBCS also connects to the ATP via Oracle REST Data Services (ORDS) to interact with custom PL/SQL exposed as REST services, for use cases like a custom error-handling layer in the ATP that all your VBCS solutions can log errors and warnings to. Additionally, we are leveraging ORDS for high-volume API services, where external systems can directly call the ATP via PL/SQL you have exposed as REST, essentially using your ATP as an API gateway, without needing another middleman like Polybase or C# that typically adds unnecessary overhead, failure points and complexity. It is worth noting that you can also proxy your ORDS services through an API gateway (like the OCI API Gateway or Apigee) instead, if you really feel you need to, for example because you want to monetize the traffic, among other reasons.
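
To illustrate the error-logging idea, here is a minimal sketch of what a call to such an ORDS-exposed logging procedure could look like from any REST client (a VBCS action chain would do the equivalent through a REST connection). The endpoint URL, payload shape and credentials are hypothetical.

# Minimal sketch: call a hypothetical ORDS-exposed logging procedure from a
# REST client. A VBCS action chain would do the equivalent via a REST
# connection. URL, payload shape and credentials are placeholders.
import requests

ORDS_URL = 'https://your-atp-host/ords/vbcs_support/logging/errors'  # hypothetical module

payload = {
    'app_name': 'expense-extension',
    'severity': 'ERROR',
    'message': 'Unhandled fault in action chain saveExpense',
    'context': '{"userId": 1234, "page": "edit-expense"}',
}

resp = requests.post(ORDS_URL, json=payload, auth=('ords_client', 'password'), timeout=10)
resp.raise_for_status()
print('Logged with HTTP status', resp.status_code)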

OIC can also take advantage of the ATP by interacting with VBCS data through PL/SQL via ORDS, rather than through the Business Object API layer in VBCS, which can get really complex (and slow) depending on what you are trying to do; it is often much more beneficial to access the VBCS database objects directly and just call a wrapper via ORDS/REST. OIC can also use ORDS in the ATP to offload complex business logic and simply receive the results to continue processing, rather than doing that complex and heavy logic in OIC, which can take longer from a performance perspective and be harder to support (think long, complex orchestrations with many actions in OIC, versus a stored procedure you can easily read and tune, with Generative AI doing the leg work, while you use OIC with its adapters to do I/O with FA natively using the finalized artifacts).

Lastly, you can take advantage of included features in the ATP, like APEX and the Oracle Machine Learning Studio, to have conversations with your data, build compelling dashboards, reports and web solutions (the ADW also has all of these benefits).


Conclusion


This reference integration and extension architecture illustrates how Oracle Fusion Cloud can be expanded to support dynamic enterprise needs. With tools like OIC, ADW, Kafka, and GoldenGate, organizations can automate business logic, integrate disparate data sources, and streamline their analytics processes. By leveraging these components, businesses can unlock greater agility, scalability, and data-driven decision-making capabilities, ensuring they stay competitive in a rapidly evolving digital world. Additionally, you can remove non-transactional reporting and data needs from FA directly, drastically improving the performance of the application by freeing up resources for transactional activity and real-time reporting, among many other benefits both already discussed and otherwise implied.

Friday, October 4, 2024

Oracle Application Express (APEX) - Generative AI Capabilities

Oracle Application Express (APEX) continues to revolutionize the way developers create web applications with its new Generative AI assistant. This cutting-edge feature integrates artificial intelligence into APEX’s low-code environment, allowing developers to accelerate the development process like never before. With the APEX Generative AI assistant, users can now generate SQL queries, PL/SQL and JavaScript code, and even create applications simply by describing their requirements in plain language. This means less time spent writing code and more time focused on refining application logic and design.

By bridging the gap between natural language and complex code generation, the AI assistant significantly reduces the learning curve for new developers while enhancing the efficiency of experienced ones. As Oracle APEX continues to evolve, the inclusion of AI-powered features sets a new standard for rapid application development, providing a powerful toolset that enhances productivity and creativity across all skill levels.

This introduction of generative AI in APEX showcases Oracle’s commitment to integrating advanced technologies that make development more accessible, efficient, and intuitive. Whether you're a seasoned developer or just beginning your journey with APEX, the Generative AI assistant opens up new possibilities for creating robust, data-rich applications faster than ever before.

Let us take a look at how to set up the Generative AI feature and some of the use cases!

Note: click the images below to expand them for ease of use and readability! 

Setup


In order to take advantage of these features, we need to allow APEX to interact with an LLM via its API layer; to do that, navigate to the "Workspace Utilities" area and then select the "Generative AI" option.



Once there, you can create a trust between your APEX instance and an LLM. In the screenshot below we see the trust set up with OpenAI, using an API key that I have access to. I also show the options available besides OpenAI, which are Cohere and Oracle's OCI Generative AI Service (which I recommend, especially if you already use Oracle Cloud Infrastructure, so you can take advantage of all the security you already have in place in your Virtual Cloud Network!).



APEX Assistant - SQL Workshop


Now for the exciting part! Within the SQL Workshop, you will see an option to click on the "APEX Assistant" which will open a panel on the right side, allowing you to access the query builder feature!


In the above we see the assistant joining two tables for us. We simply asked whether two tables in the current schema could be joined, and it provided the SQL statement to do just that! This can be very useful when seeking assistance writing complex SQL for a data lineage you may not be very familiar with, and it is also something you cannot easily accomplish in another tool like ChatGPT, because the APEX Assistant has direct access to the metadata in your database, making things easier!

Besides the query builder feature, we have the general assistance mode, where we can ask it to develop code for us, modify and optimize code, etc. In this example we ask it to write a PL/SQL stored procedure:


Notice how the "insert" feature will drop the code provided into your development canvas automatically!

In this next example, we switched the language to JavaScript, and asked a question a bit more complex:


Create Applications using Gen AI


Besides the AI Assistant in the SQL Workshop, there's another very useful feature, this time in the App Builder!

Here another option is introduced, to use a conversational agent to generate apps, in addition to existing options like creating an app from a file!



This is a powerful feature to get you started with your application, and I am excited to see how it evolves in the future, allowing for more customization and increased usability, but it is definitely a step in the right direction!

Conclusion


As we saw, there are plenty of new Generative AI features to be excited about within APEX, and I am very eager to see how they evolve over time. They are certainly already powerful, particularly the Assistant in the SQL Workshop, and if you use APEX already, there is no reason not to jump on these very cool features!

Tuesday, October 1, 2024

UML Diagrams For Productivity and Clarity

Following up on the last two entries about using Python to interact with Oracle MySQL and Oracle Linux, I want to introduce PlantUML, a fantastic tool for creating visual aids to help your development journey. We will create a simple UML diagram for the Python monitoring UI that we discussed in the last entry, and also look at a couple of class diagrams for Kafka consumer and producer services.

Before getting to the diagram and the UML code, let's talk about how to run PlantUML locally on your machine using VS Code.

VS Code

Install VS Code; it’s free.

https://code.visualstudio.com/download

Plant UML Extension


In VS code, go to the extensions marketplace, and install the PlantUML Extension.

You also need to have a Java JRE on your machine; install that as well from: https://www.java.com/en/download/manual.jsp


Diagram Previews

To use it, create a new text file in VS Code and select PlantUML as the language. Then paste the UML code and hit “Alt + D” to open the preview screen. You can then copy the images.




Now, let's look at some examples, including one following up on our Python Monitoring UI referenced in the last blog entry.

UI Diagram


@startuml

skinparam componentStyle rectangle
skinparam rectangle {
  BackgroundColor<<main>> LightGray
  BackgroundColor<<option>> White
  BorderColor<<main>> Black
  BorderColor<<option>> Gray
}

rectangle "Python UI - Homepage" <<main>> {
    rectangle "Kafka Status" <<option>> 
    rectangle "Zookeeper Status" <<option>> 
    rectangle "MySQL Status" <<option>> 
    rectangle "Map View" <<option>> 
}

@enduml



Class Diagram - Consumer

Diagram depicting a consumer service in a Kafka architecture for a vehicle telemetry system.

@startuml

class KafkaConsumer {
    +consumeMessage()
    +parseJSON(String message)
    +processData(SchoolBusData busData)
}

class SchoolBusData {
    +happenedAtTime: String
    +assetId: String
    +latitude: float
    +longitude: float
    +headingDegrees: int
    +accuracyMeters: float
    +gpsSpeedMetersPerSecond: float
    +ecuSpeedMetersPerSecond: float
}

class DatabaseConnector {
    +insertSchoolBusData(SchoolBusData busData)
}

class MySQLDatabase {
    +save()
}

KafkaConsumer --> SchoolBusData : Parses
KafkaConsumer --> DatabaseConnector : Inserts into
DatabaseConnector --> MySQLDatabase : Stores

@enduml


Class Diagram - Producer

@startuml

class KafkaProducer {
    +produceMessage()
    +serializeToJSON(SchoolBusData busData)
    +sendToKafka(String jsonMessage)
}

class SchoolBusData {
    +happenedAtTime: String
    +assetId: String
    +latitude: float
    +longitude: float
    +headingDegrees: int
    +accuracyMeters: float
    +gpsSpeedMetersPerSecond: float
    +ecuSpeedMetersPerSecond: float
    +generateSampleData(): SchoolBusData
}

class KafkaTopic {
    +receiveMessage(String jsonMessage)
}

KafkaProducer --> SchoolBusData : Generates
KafkaProducer --> KafkaTopic : Sends message

@enduml


These are a couple of the diagram types you can create using PlantUML in VS Code. In another entry we will cover a few more, including a sequence diagram for a proposed architecture for getting data out of Oracle Fusion Cloud.

Oracle Linux and MySQL, Kafka and Python Flask Monitoring API

In this entry we will dig deeper into the previous post that dealt with invoking an Oracle MySQL stored procedure using Python. The focus of this entry is creating a Python API to monitor Kafka and MySQL running on an Oracle Linux VM. This API can be invoked from a User Interface that will allow the user to check the statuses of these different components.

To create a Python API that will execute the commands on your Oracle Linux or RHEL system to check the status of MySQL, Zookeeper, and Kafka, you can use the subprocess module in Python to run shell commands. 

Below is an example of how you can implement this.

Step-by-Step Implementation:

  • Create Python Functions to Check Status: Each function will execute the corresponding system command using subprocess.run and return the output.
  • Set Up Flask API: We'll use Flask to create a simple API that the UI can call to retrieve the status.

Python Code:

import subprocess
from flask import Flask, jsonify

app = Flask(__name__)

# Function to check MySQL status
def check_mysql_status():
    try:
        result = subprocess.run(['sudo', 'systemctl', 'status', 'mysqld'],
                                capture_output=True, text=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        return str(e)

# Function to check Zookeeper status
def check_zookeeper_status():
    try:
        result = subprocess.run(['sudo', 'systemctl', 'status', 'zookeeper'],
                                capture_output=True, text=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        return str(e)

# Function to check Kafka status
def check_kafka_status():
    try:
        result = subprocess.run(['sudo', 'systemctl', 'status', 'kafka'],
                                capture_output=True, text=True)
        return result.stdout
    except subprocess.CalledProcessError as e:
        return str(e)

# Flask API route to get MySQL status
@app.route('/status/mysql', methods=['GET'])
def get_mysql_status():
    status = check_mysql_status()
    return jsonify({'service': 'MySQL', 'status': status})

# Flask API route to get Zookeeper status
@app.route('/status/zookeeper', methods=['GET'])
def get_zookeeper_status():
    status = check_zookeeper_status()
    return jsonify({'service': 'Zookeeper', 'status': status})

# Flask API route to get Kafka status
@app.route('/status/kafka', methods=['GET'])
def get_kafka_status():
    status = check_kafka_status()
    return jsonify({'service': 'Kafka', 'status': status})

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)


Explanation:

  • subprocess.run: Executes the systemctl commands to check the status of MySQL, Zookeeper, and Kafka. The capture_output=True argument captures the output, while text=True ensures the output is returned as a string.
  • Flask: Provides an API endpoint for each service, which the UI can call to check the respective statuses.
  • Routes: Each API route (/status/mysql, /status/zookeeper, /status/kafka) responds to a GET request and returns the status of the requested service in JSON format.


Running the API:

To run the Flask API, ensure Flask is installed:

pip install Flask

To Start the Application:

python your_script_name.py


Creating the UI:

For the UI, you can use any front-end technology (HTML, React, etc.) and have buttons that call these API endpoints to display the status of each service.

For example:
  • A button for MySQL could call /status/mysql.
  • A button for Kafka could call /status/kafka.
  • A button for Zookeeper could call /status/zookeeper.
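
Before wiring up any buttons, you can sanity-check the endpoints with a few lines of Python; this is a quick sketch assuming the Flask app above is running locally on port 5000.

# Quick sanity check of the three endpoints, assuming the Flask app above is
# running locally on port 5000.
import requests

for service in ('mysql', 'zookeeper', 'kafka'):
    data = requests.get(f'http://localhost:5000/status/{service}', timeout=10).json()
    # Print just the first line of the systemctl output for readability
    first_line = data['status'].splitlines()[0] if data['status'] else '(no output)'
    print(data['service'], '->', first_line)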
Note on Permissions:

Ensure that the user running the Python script has the appropriate permissions to run the systemctl commands using sudo. You may need to modify the sudoers file to allow passwordless sudo for these commands.

Sunday, September 29, 2024

Connecting to an Oracle MySQL Database and Invoking a Stored Procedure using Python

In this entry, we will explore how to use Python to connect to an Oracle MySQL database and invoke a stored procedure. This is particularly useful when you're looking to interact with your database programmatically—whether it's for inserting data, querying, or managing business logic through stored procedures.

We will walk through a Python script that connects to the database, generates some random test data (in this case telemetry data for school buses), and invokes a stored procedure to insert the data into multiple tables. Below is the full Python code, and we’ll break down each part in detail to help you understand how it works.

This assumes that you have Oracle MySQL running, a stored procedure to call, and the proper credentials to access your database; you can modify this script and use a stored procedure of your own!

Here's the full Python script that connects to the MySQL database, generates random test data, and calls the stored procedure InsertEvents.

Script:

import mysql.connector
from mysql.connector import errorcode
import random
import datetime

# Database connection parameters
config = {
    'user': 'yourusername',
    'password': 'yourpassword',
    'host': 'ip address of vm running mysql',
    'port': 3306,  # replace with the port you are using (3306 is the MySQL default)
    'database': 'name of your mysql database',
}

# Function to generate random test data
def generate_test_data():
    asset_id = random.randint(1000, 9999)
    happened_at_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    latitude = round(random.uniform(-90, 90), 6)
    longitude = round(random.uniform(-180, 180), 6)
    heading_degrees = random.randint(0, 360)
    accuracy_meters = round(random.uniform(0.5, 20.0), 2)
    geofence = 'TestGeofence' + str(random.randint(1, 100))
    gps_speed = round(random.uniform(0, 30), 2)
    ecu_speed = round(random.uniform(0, 30), 2)

    return (asset_id, happened_at_time, latitude, longitude, heading_degrees,
            accuracy_meters, geofence, gps_speed, ecu_speed)

# Function to insert event data by calling the stored procedure InsertEvents
def insert_event_data(cursor, data):
    try:
        cursor.callproc('InsertEvents', data)
        print(f"Successfully inserted event for asset_id {data[0]}")
    except mysql.connector.Error as err:
        print(f"Error: {err}")
        return False
    return True

# Main function to connect to the database and insert events
def main():
    try:
        # Connect to the database
        cnx = mysql.connector.connect(**config)
        cursor = cnx.cursor()

        # Ask the user how many events to generate
        num_events = int(input("Enter the number of events to generate: "))

        # Generate and insert the specified number of test events
        for _ in range(num_events):
            test_data = generate_test_data()
            if insert_event_data(cursor, test_data):
                cnx.commit()
            else:
                cnx.rollback()

        cursor.close()
        cnx.close()

    except mysql.connector.Error as err:
        if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
            print("Something is wrong with your username and/or password")
        elif err.errno == errorcode.ER_BAD_DB_ERROR:
            print("Database does not exist or is not reachable")
        else:
            print(err)

if __name__ == "__main__":
    main()

Inserting data using the stored procedure:

The insert_event_data() function calls the stored procedure InsertEvents and passes in the data.

cursor.callproc('InsertEvents', data): This method invokes the stored procedure InsertEvents with the data tuple as input parameters.

If the procedure is successful, a success message is printed; otherwise, an error message is displayed.

Connecting to the MySQL Database:

The main() function handles the database connection and inserts a user-specified number of events.

The mysql.connector.connect(**config) establishes a connection to the MySQL database using the provided configuration.

The script asks the user to input the number of events they wish to generate (this will prompt the user in the console).

It then generates the random data and inserts it by invoking the stored procedure. The transactions are committed or rolled back based on whether the insertion was successful.

To run the script:

Make sure your database connection parameters are correct.

Ensure that the stored procedure, like InsertEvents, is defined in your MySQL database.

Run the Python script, which will prompt you to enter the number of events to generate and insert them into the database.

Wrapping up:

This script demonstrates a simple yet powerful way to connect to an Oracle MySQL database using Python and invoke a stored procedure. With a little adjustment, you can use it for various operations, such as fetching data, updating records, or automating tasks. By leveraging Python and MySQL Connector, you can efficiently manage your data and workflows at no cost! Remember that you can create an Always Free account in Oracle Cloud Infrastructure (OCI) and provision an Always Free Compute instance (I recommend Linux), then run MySQL and Python there, all free!

Sunday, April 21, 2024

Keeping Track of Redwood Enabled Pages

As we continue to make progress in our HCM Redwood journey, having done exploratory work in a lower environment while working on a timeline and milestones, we ran into an issue that, although it seems simple, can cause problems.

Something that was slowing us down was not knowing exactly which pages are Redwood enabled. Essentially, we were losing track of which profile options had been enabled as we did our testing and exploration, because with Redwood it isn't an all-or-nothing setup: you have to choose which profile options to enable for which pages, and although this gives you a granular level of control, it can cause you to lose track.

We received the code below from our Oracle partners; it can be used to check the status of enabled Redwood profile options in a given environment, and it is a very helpful piece of SQL that can save you time and energy.

This SQL can be used to check the current status of Redwood profile options in an environment:

SELECT
    po.profile_option_name,
    user_profile_option_name,
    level_value,
    profile_option_value,
    (CASE val.last_update_login
         WHEN '-1' THEN 'No'
         ELSE 'Yes'
     END) overridden,
    po.start_date_active,
    po.end_date_active,
    potl.source_lang,
    potl.language
FROM
    fusion.fnd_profile_option_values val,
    fusion.fnd_profile_options_tl potl,
    fusion.fnd_profile_options_b po
WHERE
    val.profile_option_id = po.profile_option_id
    AND po.profile_option_name = potl.profile_option_name
    AND potl.language = 'US'
    AND level_value = 'SITE'
    AND (po.profile_option_name LIKE '%VBCS%' OR po.profile_option_name LIKE '%REDWOOD%')
    AND po.seed_data_source LIKE 'hcm/%'

This is important because there are multiple people working on an environment, and also because, as you promote your Redwood changes to higher environments, you can compare what has been enabled as another safety measure. For example, this SQL can be used to compare Production and UA environments, to ensure that both are in sync and that mistakes weren't made by enabling functionality that was not tested. Also note that the seed_data_source value in the example above is 'hcm', but the same SQL can be used for other subject areas.
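
As a rough illustration of that comparison, here is a small Python sketch that diffs two CSV exports of the query above (one per environment) and prints the profile options whose values differ; the file names and upper-case column headers are assumptions based on the columns selected in the SQL.

# Minimal sketch: diff two CSV exports of the profile-option query above (one
# per environment) and print the options whose values differ. File names and
# the upper-case column headers are assumptions based on the SQL.
import csv

def load(path):
    with open(path, newline='') as f:
        return {row['PROFILE_OPTION_NAME']: row['PROFILE_OPTION_VALUE']
                for row in csv.DictReader(f)}

prod = load('redwood_profiles_prod.csv')
test = load('redwood_profiles_test.csv')

for name in sorted(set(prod) | set(test)):
    if prod.get(name) != test.get(name):
        print(f'{name}: PROD={prod.get(name)}  TEST={test.get(name)}')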

Saturday, April 20, 2024

Oracle AI Documentation and Information

Everyone knows that AI and ML are the trending topics, and Oracle is certainly investing heavily in its own AI strategy.

The links below are great resources for the AI documentation and information that Oracle has made available, and should definitely be explored by anyone interested in seeing what Oracle has to offer in this space!

1.      AI Overview - OCI Generative AI service provides customizable large language models (LLMs) that cover a wide range of use cases for text generation.

https://docs.oracle.com/en-us/iaas/Content/generative-ai/overview.htm#overview

2.      Concepts for Generative AI - concepts and terms related to the OCI Generative AI service

https://docs.oracle.com/en-us/iaas/Content/generative-ai/concepts.htm

3.      PII Data with AI – How to safeguard the corporate jewels and sensitive customer data when using AI

https://orasites-prodapp.cec.ocp.oraclecloud.com/site/ai-and-datascience/post/rotect-pii-customer-data-ads-pii-operators

4.      Oracle AI GitHub Repository - Great examples of LLMs and AI services, with step-by-step code examples using OCI AI services

https://github.com/oracle-samples/oci-data-science-ai-samples/

5.      Live Labs – 40 Hands on Labs for AI services and related services. This is a great source for any Oracle Technology or product https://apexapps.oracle.com/pls/apex/dbpm/r/livelabs/livelabs-workshop-cards?session=108343633199478

6.      Oracle AI and Data Science YouTube Channel – 

https://www.youtube.com/playlist?list=PLKCk3OyNwIzv6CWMhvqSB_8MLJIZdO80L

7.      OCI Data Science Landing Pad – starting point for setting up an OCI tenancy using the Data Science service. Step-by-step instructions for setting up the service, creating models, and submitting jobs

https://docs.oracle.com/en-us/iaas/data-science/using/home.htm

8.      Embedded AI and ML capabilities in Oracle Analytics -

https://orasites-prodapp.cec.ocp.oraclecloud.com/site/ai-and-datascience/post/discover-the-power-of-oracle-analytics-with-ai

9.      Oracle Analytics Cloud and AI for Business Users

https://www.oracle.com/business-analytics/analytics-platform/capabilities/ai-ml/

10. Intro to Select AI in Autonomous DB

https://blogs.oracle.com/machinelearning/post/introducing-natural-language-to-sql-generation-on-autonomous-database

11. Select AI -  Use Select AI to Generate SQL from Natural Language Prompts

https://docs.oracle.com/en-us/iaas/autonomous-database-serverless/doc/sql-generation-ai-autonomous.html#GUID-9CE75F94-7455-4C09-A3F3-118C08E82B7E

12. Machine Learning Blog Page – This Blog Homepage is focused on AutoML generating code for SQL, Python, and R

https://blogs.oracle.com/machinelearning/

13. Machine Learning documentation for SQL, Python, R, and Spark

https://docs.oracle.com/en/database/oracle/machine-learning/index.html

14. Machine Learning Technical Replays and Office Hours

https://asktom.oracle.com/ords/r/tech/catalog/series-landing-page?p5_oh_id=6801#sessions

15. LiveLabs Hands on training for Machine learning

https://apexapps.oracle.com/pls/apex/f?p=133:100:8251167255223::::SEARCH:machine%20learning

Oracle REST Data Services (ORDS) - An Introduction, Use Cases and Examples

Oracle REST Data Services (ORDS) is truly an underrated feature of the Oracle Database, and now that it is embedded in the cloud Autonomous Databases, it is a feature that is a MUST!

Click on the images to zoom in!


So what? How is it useful? Well, here is a use case where Oracle Integration Cloud interfaces with Visual Builder to push and pull data from the VBCS Business Objects (BOs) using the REST API layer on the BOs. This is a real use case that many Oracle clients have, especially those that implemented OIC and VBCS back in the Gen 1 days, when using the REST support at the BO layer was the best approach to integrate with VBCS.


In the above we can see that the individual calls to update BO attributes can take a long time, because OIC is iterating through a lot of data, making decisions and making updates. There are also plenty of opportunities for disruption during the API chains, and retriggering is far too complex.

Let us now observe the same scenario, but utilizing ORDS instead:


As noted above, this is far simpler: it performs better, has fewer failure points, and is easier to maintain and support. This is the power of ORDS!

It is a perfect match for VBCS because Visual Builder wants to speak REST, and ORDS provides an excellent opportunity for that to happen while also taking advantage of the power of stored procedures to manipulate the BOs at the database level, without hitting the VBCS middle tier or relying on complex API interactions.

Below are some additional use cases:


To expand further on the above:

VBCS calls a logging/audit stored procedure exposed as REST via ORDS to log exceptions raised in the VBCS action chains (blocks of code). Without this, we have to shadow user sessions and capture the problem in real time via dev tools, unless the message is being handled and shown to the user on the screen, and many of them are not, so this is a great way to be proactive and see trends of errors and warnings from user activity.

Many are in the process of replacing BIP reports that are used as custom APIs and called from VBCS (a common pattern that many have used) by instead utilizing ERP/HCM APIs where feasible, but there will always be cases where we still need to use BIP in this manner, for example because the API doesn't support the columns or filters required, or the API may simply not exist.

My recommendation here is to instead schedule the BIP extracts and load the data into the VBCS ATP (if you have your own tenant database for VBCS, which you should), then create stored procedures to retrieve it from VBCS via REST; that way you do not hammer ERP/HCM via the BIP layer, and only hit it once when the scheduled job runs. If you don't use VBCS but extract data out of ERP/HCM into your own downstream database, you can do the same rather than allowing direct API traffic into ERP/HCM, and this isn't limited to BIP extracts. In other words, you can extract data from Fusion using BICC and other means better suited than BIP, land it in a warehouse or data lake, and then use ORDS to create an API layer for other systems, if the frequency of data needs aligns.
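
To make the consumer side of this pattern concrete, here is a minimal sketch of a downstream system reading the staged data from a hypothetical ORDS endpoint on the ATP, instead of calling BIP in Fusion directly; the URL, parameters and credentials are placeholders.

# Minimal sketch of the consumer side of this pattern: a downstream system
# reads the staged data from a hypothetical ORDS endpoint on the ATP instead
# of invoking BIP in Fusion directly. URL, parameters and credentials are
# placeholders; ORDS collection responses return rows in an "items" array.
import requests

ORDS_URL = 'https://your-atp-host/ords/staging/hcm/assignments'  # hypothetical module

params = {'last_update_date': '2024-10-01', 'limit': 500, 'offset': 0}
resp = requests.get(ORDS_URL, params=params, auth=('api_client', 'password'), timeout=30)
resp.raise_for_status()

for row in resp.json().get('items', []):
    print(row)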

Lastly, the below talks about the simplicity of deploying ORDS assets, which is very important given stringent security requirements around code quality scanning and vulnerabilities, with tools like SonarQube. Since exported ORDS and APEX assets are just SQL and PL/SQL files, there is not much to be concerned about, unlike with high-code frameworks that have extensive projects with hundreds of files in the build artifacts.


ORDS is an excellent tool in its own right, and when paired with APEX it can do wonders, but that's a topic for another day!

Thursday, April 11, 2024

Oracle LiveLabs - Hands On Education


Hello everyone!

I wanted to bring this capability to your attention: Oracle LiveLabs Home (oracle.com)

Oracle LiveLabs gives you access to Oracle's tools and technologies to run a wide variety of labs and workshops using an Oracle tenancy. There’s extremely good content for Developers, DevOps, Data Engineers, Architects and Data Scientists (including content around data lakes, data warehouses, analytics, ML, AI, etc.), particularly in, but not limited to, the OCI space.

Check out the full blog on LinkedIn at https://www.linkedin.com/pulse/oracle-livelabs-hands-education-julio-lois-flwze

Sunday, March 24, 2024

Oracle OCI Gen AI Services and Enhancing Developer Productivity

Let’s talk about Oracle’s OCI Gen AI Service, Generative AI Service | Oracle [oracle.com], in the context of the developer productivity opportunities it creates, which can transform development shops to be more efficient all around!

I am currently exploring the OCI service for areas such as the below, and will write a follow-up entry relative to my findings:

  • Code generation and auto-completion: With generative AI, the potential to write code using AI to greatly speed up building extensions and integrations will be a game changer, much like how data scientists can now write R & Python code exponentially faster using ChatGPT like services, so can writing extensions and integrations become far less manual for you.
    • More time could be spent on design, unit testing and other aspects versus manual development.
  • Code refactoring & bug fixing: Asking the AI to code review, improve, make suggestions around implementation for custom code is something already available for high code programming languages like C#, Python, PLSQL and Java in tools like ChatGPT, and this can increase quality and reduce bugs.
    • You can embed AI reviews to your peer review process, as well as during the build process to optimize performance and reduce logic errors.
    • To help with KT’s when flexing with staff or when a developer is touching a code base they did not previously own, the developer can ask the AI service to explain logic, speeding up the learning process exponentially.
  • Automated Test Generation: having the ability for the AI to generate test scenarios and test cases, in supported frameworks, based on the implementation and logic would save time and reduce defects.
    • We can ask the AI to interpret code and suggest unit test scenarios and even build them (depending on the framework).
  • Code comparisons: rather than using tools like Beyond Compare, to manually inspect differences in code bases, you can ask the AI to inspect it for you and produce a comparison report with intelligence built in (meaning, really explain what is different, not just highlight text differences).
  • Code notation & summarization: imagine uploading code to the AI service and asking for a detailed implementation report with steps and explanations, and even a technical design and graphical support such as sequence diagrams.
    • This would be very helpful for custom code where technical designs were not clearly documented or not documented at all (more prevalent now due to the Agile methodology putting less emphasis on documentation) and useful for onboarding new developers and support staff, for product delivery to hand off artifacts to operations, etc.
    • The time savings from not having to write detailed technical designs would be fantastic, and in general to speed up any developer working on a case by asking the AI to summarize and explain sections of the code.
I can produce many other scenarios, but I think I’ve made my point.

So, why Oracle, since there are similar services in the industry? I believe the Oracle services can have a competitive advantage because Oracle's AI models should yield higher accuracy for frameworks such as Oracle JET, Java, PL/SQL, Fast Formulas, etc., since Oracle owns those frameworks and uses the AI service internally as part of its own DevOps processes. The model could also learn over time as your developers utilize the service, on top of Oracle's own tuning of the service. Lastly, it would run within your secure VCN and OCI environment, so privacy and security should not be a concern if you already have an OCI tenancy, and this is a major factor for many wishing to adopt Gen AI safely.

I think the usage of this, for the moment, is limited to high code frameworks such as Java, PLSQL, C#, Oracle JET (JavaScript), Python etc. but we would be very interested in extending the usability and benefits to middleware technology such as Oracle Integration, for the same reasons listed in this blog.

To that end, I have raised an Idea in Customer connect for the integration of the OCI Gen AI Service to Oracle Integration Gen 3 (OIC) going back to September of last year, so please support this idea over in Cloud Customer Connect by voting and commenting on it (Idea Number: 713448): Generative AI for OIC — Cloud Customer Connect (oracle.com) [community.oracle.com]

Stay tuned for my findings over the next few weeks, as I explore the Gen AI service for these use cases!

Oracle Generative AI Strategy and Options

The software industry continues to innovate and iterate upon AI capabilities, and Oracle is clearly investing heavily in this space as well, with very exciting developments being announced recently.

Below are highly informative strategy updates that you may want to review relative to Oracle’s AI strategy and recent developments.
The below graphic shows Oracle's AI Technical Stack and where recent investments have been made:


Click on the image to maximize it

These AI services are the same ones Oracle uses internally to develop AI capabilities in the Fusion applications, Fusion Analytics, etc., now exposed for customers to utilize as well.

Something that Greg mentions in the first video is the recently launched GenAI Agents beta, a service that allows you to have a conversation with your data within your Autonomous Database. There is also a new feature called "Autonomous Database Select AI", also seen above; here's a GREAT blog about it: Introducing Select AI - Natural Language to SQL Generation on Autonomous Database (oracle.com)

I think that both the GenAI Agents and the Select AI feature should be considered as part of any modern data strategy, particularly when the data sources are Oracle applications (such as ERP and HCM). Once your Autonomous Data Warehouse (ADW) has the Fusion data in it via BICC, you can use these features there without moving the data to a third-party tool to do similar operations, which would increase your cost of ownership and reduce the harmony of your technical stack (meaning using too many vendor products unnecessarily).

Imagine transforming part of your workforce from writing reports to having conversations with the data, without (a) having to move it elsewhere or (b) having to spend a lot of time writing complex queries and designing intricate reports. Their workload could shift from designing and building reports to tuning the data model and talking with the data, and this could then be expanded to end users over time, where internal teams would focus on data model tuning and everyone else is just talking with the data.

Additionally, this would all be happening within the secure boundaries of your OCI tenancy, reducing the privacy and security concerns that often worry the mind!

Real life examples for those using ERP and HCM that could be made possible in the near future:

%sql

SELECT AI how many invoices are past due

SELECT AI how many suppliers do we consistently not pay on time, and what are the reasons

SELECT AI how many expenses will be past due by next week

SELECT AI how many people under my organization may retire over the next 5 years

SELECT AI how many people under my organization will lose vacation by end of year

No more reports, just conversations with the data..!

Monday, March 18, 2024

Oracle Fusion Cloud - BIP Performance Tuning Tips and Documentation

During the early days of Oracle Cloud adoption for SaaS technologies like ERP and HCM, it was quite common to develop extracts and reports using complex custom SQL data models that would either be downloaded by users or scheduled to extract data and interface it to external systems. Over time, Oracle has released guidelines and best practices to follow, and efforts like the SQL guardrails have emerged to prevent poorly performing custom SQL from impacting environment performance and stability. To that end, I have been aggregating useful links to documentation around this topic from our interactions with Oracle Support over the past few months, which are consolidated in this post.

Links to Documentation:

For scheduled reports, Oracle recommends the following guidelines:

  • Having a temporary backlog (wait queue) is expected behavior, as long as the backlog gets cleared over the next 24 hours.
  • If the customer expects jobs to get picked up immediately, submit them via ‘online’ and wait, as long as they do not hit the 500 second limit.
  • If there are any jobs that need to be processed with high priority (over the rest), it is advised to mark those reports as ‘critical’ so that they are picked up by the first available thread.
  • Oracle advises customers to tune their custom reports so that they complete faster and do not hold threads for a long time.
  • Oracle advises customers to schedule less impactful jobs during off-peak or weekend hours – manage scheduler resources smartly.
Additionally, note the following:
  • With Release 13, all configuration values, including BI Publisher memory guard settings, are preset based on your earlier pod sizing request and cannot be changed.
  • For memory guard, the Oracle SaaS performance team has calculated and set the largest values that still provide a robust and stable reporting environment for all users to meet business requirements.
  • The BI service must support many concurrent users, and these settings act as guardrails so an individual report cannot disrupt your entire service and impact the business.
Ultimately, effective instance management is critical for ensuring that your Cloud HCM system runs smoothly. Allocating resources based on usage and demand will require coordination with various teams. There is a common misunderstanding that each HCM tool, such as HDL, HCM Extracts, or manual ESS job submissions, operates on its own pool of threads. However, in reality, they all share the same ESS pool of threads. It is, therefore, advisable for customers to properly maintain and optimize their runbook to avoid overburdening the system and creating resource constraints.

Lastly, depending on the size of your pods, you have the option to allocate pods for specific tasks. For example:
  • Bulk loading / performance testing / payroll parallel runs: the pod with the highest thread count is a good candidate to be utilized for bulk data loading, payroll parallel runs, and similar resource-intensive tasks such as performance testing.
The below graphic shows how ESS Threads are consumed, to exemplify the statements made prior:



Saturday, January 20, 2024

Oracle Redwood Migration and Adoption


Oracle continues to migrate, redesign and implement new features utilizing its Redwood design system, and HCM continues to be a big focus of these efforts in the coming releases (it already has been, but it's really picking up steam now, with the end in sight!). In yesterday's office hours for HCM Redwood adoption, hosted by Oracle, it was clear that Redwood will likely become fully mandatory for HCM by 25B, and Oracle was clear that if you haven't opted in by then, they will move you!

In this post we won't go into a lot of detail about what Redwood is, but in short, it is a modern design approach bringing an updated UI that leverages AI and ML very well, while delivering a more engaging usability experience for users. Instead, we will focus on key adoption points that you will want to make note of.

We already covered that full adoption is planned for 25B (around the April 2025 timeframe); however, there are accelerated adoption points that need to be considered, meaning you don't have until 25B to opt in for the items below:

  • Redwood Learning self-service mandatory for learners and managers (along with other select pages being enabled) - 24B
  • Checklists and Onboarding replaced by Journeys (Redwood) - 24D
  • Time and labor will transition to Redwood (24D), meaning features like timecards will have a new look and feel, etc.
The guidance right now is for current live customers to start building an adoption roadmap and select lower environments where they can enable Redwood in desired areas, perform impact analysis, and learn. There is also guidance for new and in-flight implementations, and the message is basically to adopt Redwood to the maximum extent possible, to avoid significant changes soon after your go-live.

Oracle also urges clients to inventory their assets, and they list the below, as areas of consideration:
  • Page Composer personalizations
  • Transaction Design Studio personalizations
  • AutoComplete rules: Defaulting and Validations
  • Approvals and Notifications
To the above I would add your fast formulas (for which it is not entirely clear whether significant impact is to be expected, so the question remains) and your Visual Builder extensions, particularly those built using the HCM-embedded VB Studio capability. Oracle also mentioned that security is the cause of 50% of reported issues with Redwood, because Redwood may require new security privileges; in other words, test your custom access and roles. Additionally, Oracle mentioned that a tool will be released soon that can be used as a starting point to catalog assets and potential impacts, in the form of a report you can run on your environment, called the "Redwood Adoption Analysis Tool". Make sure to monitor Cloud Customer Connect for when that is announced, as it will surely come in very handy. On the topic of Cloud Customer Connect, you can submit questions and request help there in the "Ask a Question" area (make sure to use the Redwood tag), and also monitor the site for events such as webinars and office hours.

If you are a current live HCM client not yet utilizing Redwood to a great extent, I recommend you dedicate an environment and enable as much as you can there, then build a roadmap with milestones, a test plan, and so on. Additionally, change management will be very important: communications, training, job aids, and demos, because it is a significant change from a user experience standpoint.

Lastly, below are several useful resources relative to Redwood:
  • MOS article on enabling redwood pages- HCM Redwood Pages with Profile Options – My Oracle Support Doc ID 2922407.1
  • Fusion HCM: Redwood Required Steps for Environment Provisioned on Release 24A - Doc ID 2997123.1
  • Extending Redwood Applications using Visual Builder Studio - Doc ID 2991662.1