Monday, March 18, 2024

Oracle Fusion Cloud - BIP Performance Tuning Tips and Documentation

During the early days of Oracle Cloud adoption for SaaS offerings such as ERP and HCM, it was common to develop extracts and reports using complex custom SQL data models that would either be downloaded by users or scheduled to extract data and interface it to external systems. Over time, Oracle has released guidelines and best practices to follow, and efforts such as the SQL guardrails have emerged to prevent poorly performing custom SQL from impacting environment performance and stability. To that end, I have been aggregating useful links to documentation on this topic from our interactions with Oracle Support over the past few months, which are consolidated in this post.
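
To illustrate the kind of tuning the guardrails encourage, the sketch below contrasts a typical anti-pattern with a more selective version of the same custom SQL. The table and column names (PER_ALL_PEOPLE_F, PER_PERSON_NAMES_F) are standard HCM objects, but the query itself is a simplified assumption for illustration, not a prescribed Oracle pattern.

  -- Anti-pattern: an unfiltered SELECT * forces a full scan and returns every effective-dated row
  -- SELECT * FROM per_all_people_f;

  -- More guardrail-friendly sketch: explicit columns, a bind parameter, and
  -- date-effective filters so the database can use selective access paths
  SELECT papf.person_id,
         papf.person_number,
         ppnf.full_name
  FROM   per_all_people_f   papf,
         per_person_names_f ppnf
  WHERE  ppnf.person_id = papf.person_id
  AND    ppnf.name_type  = 'GLOBAL'
  AND    TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
  AND    TRUNC(SYSDATE) BETWEEN ppnf.effective_start_date AND ppnf.effective_end_date
  AND    papf.person_number = :p_person_number   -- bind parameter supplied by the data model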

Links to Documentation:

For scheduled reports, Oracle recommends the following guidelines:

  • Having a temporary backlog (wait queue) is expected behavior, as long as the backlog gets cleared over the next 24 hours.
  • If you expect jobs to be picked up immediately, submit them 'online' and wait, as long as they do not hit the 500-second limit.
  • If any jobs need to be processed with higher priority than the rest, mark those reports as 'critical' so that they are picked up by the first available thread.
  • Oracle advises customers to tune their custom reports so that they complete faster and do not hold threads for a long time.
  • Oracle advises customers to schedule less impactful jobs during off-peak or weekend hours to manage scheduler resources wisely.
Additionally, note the following:
  • With Release 13, all configuration values, including the BI Publisher memory guard settings, are preset based on your earlier pod sizing request and cannot be changed.
  • For memory guard, the Oracle SaaS performance team has calculated and set the largest values that still provide a robust and stable reporting environment for all users to meet business requirements.
  • The BI Service must support many concurrent users, and these settings act as guardrails so that an individual report cannot disrupt your entire service and impact the business.
Ultimately, effective instance management is critical to ensuring that your Cloud HCM system runs smoothly and efficiently. Allocating resources based on usage and demand requires coordination across various teams. There is a common misunderstanding that each HCM tool, such as HDL, HCM Extracts, or manual ESS job submissions, operates on its own pool of threads; in reality, they all share the same ESS thread pool. It is therefore advisable to maintain and optimize your runbook so that you do not overburden the system and create resource constraints.
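
One practical way to see how crowded the shared ESS thread pool is at different times is to report on historical job runs before adding new scheduled work to the runbook. The sketch below is a rough BIP data model query against ESS_REQUEST_HISTORY; the table is commonly used for this purpose, but treat the exact column names as assumptions to verify in your own pod.

  -- Rough sketch: count ESS jobs started per hour over the last 7 days to spot peak windows.
  -- Assumes ESS_REQUEST_HISTORY exposes DEFINITION and PROCESSSTART (verify in your release).
  SELECT TO_CHAR(TRUNC(erh.processstart, 'HH24'), 'YYYY-MM-DD HH24:MI') AS hour_bucket,
         COUNT(*)                                                       AS jobs_started,
         COUNT(DISTINCT erh.definition)                                 AS distinct_job_definitions
  FROM   ess_request_history erh
  WHERE  erh.processstart >= SYSDATE - 7
  GROUP  BY TRUNC(erh.processstart, 'HH24')
  ORDER  BY hour_bucket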

Lastly, depending on the size of your pods, you have the option to dedicate pods to specific tasks. For example:
  • Bulk loading / performance testing / payroll parallel runs: the pod with the highest thread count is a good candidate for bulk data loading, payroll parallel runs, and similar resource-intensive tasks such as performance testing.
The graphic below shows how ESS threads are consumed, illustrating the points above:


