Sunday, March 24, 2024

Oracle OCI Gen AI Services and Enhancing Developer Productivity

Let’s talk about Oracle’s OCI Generative AI Service (Generative AI Service | Oracle [oracle.com]) in the context of the developer productivity opportunities it creates, opportunities that can make development shops more efficient all-around!

I am currently exploring the OCI service in areas such as those below, and will write a follow-up entry on my findings:

  • Code generation and auto-completion: Using generative AI to write code can greatly speed up building extensions and integrations and will be a game changer. Much as data scientists can now write R and Python code far faster using ChatGPT-like services, writing extensions and integrations can become far less manual.
    • More time could be spent on design, unit testing and other aspects versus manual development.
  • Code refactoring & bug fixing: Asking the AI to review custom code, improve it, and make implementation suggestions is already possible for high-code languages like C#, Python, PL/SQL, and Java in tools like ChatGPT, and it can increase quality and reduce bugs.
    • You can embed AI reviews into your peer review process, as well as into the build process, to optimize performance and reduce logic errors.
    • To help with knowledge transfer when flexing staff, or when a developer touches a code base they did not previously own, the developer can ask the AI service to explain the logic, dramatically speeding up the learning process.
  • Automated test generation: Having the AI generate test scenarios and test cases, in supported frameworks, based on the implementation and its logic would save time and reduce defects.
    • We can ask the AI to interpret code and suggest unit test scenarios and even build them (depending on the framework).
  • Code comparisons: Rather than using tools like Beyond Compare to manually inspect differences between code bases, you can ask the AI to inspect them for you and produce a comparison report with intelligence built in (meaning it really explains what is different, rather than just highlighting text differences).
  • Code annotation & summarization: Imagine uploading code to the AI service and asking for a detailed implementation report with steps and explanations, or even a technical design with graphical support such as sequence diagrams.
    • This would be very helpful for custom code whose technical designs were not clearly documented, or not documented at all (more common now that Agile methodologies put less emphasis on documentation), and useful for onboarding new developers and support staff, handing off delivery artifacts to operations, and so on.
    • The time savings from not having to write detailed technical designs would be fantastic, and in general any developer working on a case could be sped up by asking the AI to summarize and explain sections of the code.
I could list many other scenarios, but I think the point is made; a small sketch of what the code review idea might look like in practice follows below.
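
As a concrete illustration of the code review and explanation ideas, here is a minimal sketch, assuming an Autonomous Database where the DBMS_CLOUD_AI package has already been configured with a profile pointing at the OCI Generative AI service (the GENAI_DEV profile name and the PL/SQL snippet being reviewed are hypothetical); equivalent prompts could also be sent through the OCI SDKs or REST APIs:

-- Ask the model to review a (hypothetical) piece of custom PL/SQL
SELECT DBMS_CLOUD_AI.GENERATE(
         prompt       => 'Review the following PL/SQL for bugs, performance issues and readability, '
                         || 'and suggest improvements: '
                         || 'CREATE OR REPLACE FUNCTION get_open_invoices (p_supplier_id NUMBER) '
                         || 'RETURN NUMBER IS ... END;',
         profile_name => 'GENAI_DEV',  -- hypothetical profile pointing at the OCI Generative AI service
         action       => 'chat')       -- 'chat' sends the prompt to the model without generating SQL
  FROM dual;

The same pattern extends naturally to the other scenarios above, for example asking the model to explain an unfamiliar module or to propose unit test cases for it.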

So, why Oracle, when there are similar services in the industry? I believe the Oracle services can have a competitive advantage because Oracle’s AI models should yield higher accuracy with frameworks such as Oracle JET, Java, PL/SQL, Fast Formulas, etc., since Oracle owns those frameworks and uses the AI service internally as part of its own DevOps processes. The model could also improve over time as your developers use the service, on top of Oracle’s own tuning of the service. Lastly, it would run within your secure VCN and OCI environment, so privacy and security should not be a concern if you already have an OCI tenancy, and that is a major factor for anyone wishing to adopt Gen AI safely.

I think usage of this is, for the moment, limited to high-code frameworks such as Java, PL/SQL, C#, Oracle JET (JavaScript), Python, etc., but we would be very interested in extending the usability and benefits to middleware technologies such as Oracle Integration, for the same reasons listed in this blog.

To that end, back in September of last year I raised an Idea on Cloud Customer Connect for integrating the OCI Gen AI Service with Oracle Integration Gen 3 (OIC), so please support it by voting and commenting (Idea Number: 713448): Generative AI for OIC — Cloud Customer Connect (oracle.com) [community.oracle.com]

Stay tuned for my findings over the next few weeks, as I explore the Gen AI service for these use cases!

Oracle Generative AI Strategy and Options

The software industry continues to innovate and iterate upon AI capabilities, and Oracle is clearly investing heavily in this space as well, with very exciting developments being announced recently.

Below are highly informative updates that you may want to review on Oracle’s AI strategy and recent developments.
The below graphic shows Oracle's AI Technical Stack and where recent investments have been made:


These AI services are the same ones Oracle uses internally to build AI capabilities into the Fusion applications, Fusion Analytics, and so on, and they are now exposed for customers to utilize as well.

Something Greg mentions in the first video is the recently launched GenAI Agents beta, a service that allows you to have a conversation with the data inside your Autonomous Database. There is also a new feature called Autonomous Database Select AI, also seen above; here is a great blog about it: Introducing Select AI - Natural Language to SQL Generation on Autonomous Database (oracle.com)

I think both GenAI Agents and the Select AI feature should be considered as part of any modern data strategy, particularly when the data sources are Oracle applications such as ERP and HCM. Once your Autonomous Data Warehouse (ADW) has the Fusion data in it via BICC, you can use these features right there, without moving the data to a third-party tool for similar operations (which can increase your cost of ownership and reduce technical stack harmony, meaning you end up using too many vendor products unnecessarily).

Imagine transforming part of your workforce from writing reports to having conversations with the data, without A) having to move it elsewhere or B) having to spend a lot of time writing complex queries and designing intricate reports. Their workload could shift from designing and building reports to tuning the data model and talking with the data, and this could then be extended to end users over time: internal teams would focus on data model tuning while everyone else simply talks with the data.

Additionally, this would all happen within the secure boundaries of your OCI tenancy, reducing the privacy and security concerns that so often worry people!
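
Before conversational prompts like the ones below can run, an AI profile generally has to be created and enabled in the Autonomous Database. Here is a minimal sketch, assuming the DBMS_CLOUD_AI package and using a hypothetical credential name, profile name, and Fusion table list (the real object names will depend on what BICC has loaded into your ADW):

BEGIN
  -- Create a profile that points Select AI at the OCI Generative AI service
  DBMS_CLOUD_AI.CREATE_PROFILE(
    profile_name => 'FUSION_AI',  -- hypothetical profile name
    attributes   => '{"provider"        : "oci",
                      "credential_name" : "OCI_GENAI_CRED",
                      "object_list"     : [{"owner": "FUSION_DW", "name": "AP_INVOICES_ALL"},
                                           {"owner": "FUSION_DW", "name": "PER_ALL_PEOPLE_F"}]}'
  );
  -- Make the profile active for the current session so SELECT AI prompts can use it
  DBMS_CLOUD_AI.SET_PROFILE(profile_name => 'FUSION_AI');
END;
/

With a profile in place, natural-language prompts run directly against the Fusion data already sitting in ADW.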

Real-life examples for those using ERP and HCM that could become possible in the near future:

%sql

SELECT AI how many invoices are past due

SELECT AI how many suppliers do we consistently not pay on time, and what are the reasons

SELECT AI how many expenses will be past due by next week

SELECT AI how many people under my organization may retire over the next 5 years

SELECT AI how many people under my organization will lose vacation by end of year
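
Assuming the Select AI syntax described in the blog linked above, action keywords such as showsql and narrate can also be placed after AI, either to show the SQL the model generated or to return the answer as natural language, which is handy for validating what the model did against the Fusion data; for example:

SELECT AI showsql how many invoices are past due

SELECT AI narrate how many suppliers do we consistently not pay on time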

No more reports, just conversations with the data!

Monday, March 18, 2024

Oracle Fusion Cloud - BIP Performance Tuning Tips and Documentation

In the early days of adopting Oracle Cloud SaaS technologies like ERP and HCM, it was quite common to develop extracts and reports using complex custom SQL data models that would either be downloaded by users or scheduled to extract data and interface it to external systems. Over time, Oracle has released guidelines and best practices to follow, and efforts like the SQL guardrails have emerged to prevent poorly performing custom SQL from impacting environment performance and stability. To that end, I have been aggregating useful documentation links on this topic from our interactions with Oracle Support over the past few months, and they are consolidated in this post.

Links to Documentation:

For scheduled reports, Oracle recommends the following guidelines:

  • Having a temporary backlog (wait queue) is expected behavior, as long as the backlog gets cleared within the next 24 hours.
  • If you expect jobs to be picked up immediately, submit them ‘online’ and wait, as long as they do not hit the 500-second limit.
  • If any jobs need to be processed with high priority (over the rest), it is advised to mark those reports as ‘critical’ so that they are picked up by the first available thread.
  • Oracle advises customers to tune their custom reports so that they complete faster and do not hold threads for a long time.
  • Oracle advises customers to schedule less impactful jobs during off-peak or weekend hours, managing scheduler resources smartly.
Additionally, note the following:
  • With Release 13, all configuration values, including BI Publisher memory guard settings, are preset based on your earlier pod sizing request and cannot be changed.
  • For memory guard, the Oracle SaaS performance team has calculated and set the largest values that still provide a robust and stable reporting environment for all users and meet business requirements.
  • The BI service must support many concurrent users, and these settings act as guardrails so that an individual report cannot disrupt your entire service and impact the business.
Ultimately, effective instance management is critical to ensuring that your Cloud HCM system runs smoothly and effectively. Allocating resources based on usage and demand requires coordination with various teams. There is a common misunderstanding that each HCM tool, such as HDL, HCM Extracts, or manual ESS job submissions, operates on its own pool of threads; in reality, they all share the same ESS thread pool. It is therefore advisable for customers to properly maintain and optimize their runbook to avoid overburdening the system and creating resource constraints.

Lastly, depending on the size of your pods, you have the option to allocate pods for specific tasks. For example:
  • Bulk loading / performance testing / payroll parallel runs: The pod with the highest thread count is a good candidate for bulk data loading, payroll parallel runs, and similar resource-intensive tasks such as performance testing.
The graphic below shows how ESS threads are consumed, illustrating the statements above: