Feed aggregator

CSV scripts worked well with 12c but in 19c the results are different - spaces are added before and after the value!

Tom Kyte - 3 hours 40 min ago
Hi all, this is my first time posting. We are facing a problem in 19c when generating a CSV file. In 12c the results were as expected; in 19c, spaces are added after or before the content of the columns. Please find below the information about the environment.

-- table structure

Column Name                Type       Table
-------------------------  ---------  -----------------------------
CODE_PORTEFEUILLE          CHAR(6)    EXPIT.inv_pool_prod_tmp
HISTO_DATE_VALORISATION    NUMBER(6)  GP2PROD.gestion_valorisation

-- date conversion function

CREATE FUNCTION "GP2PROD"."CONVERTION_DATE" (cd_date number) return date is
  cd_date2 date;
BEGIN
  cd_date2 := TO_DATE('14/09/1752','DD/MM/YYYY') + abs(cd_date);
  return (cd_date2);
END;

-- script

set heading off pagesize 0 linesize 0
set TRIMSPOOL ON
set RECSEP off
set verify off

CREATE TABLE inv_pool_prod_tmp as
select distinct dp.code_portefeuille
from   descriptif_comp_portefeuille dcp,
       descriptif_portefeuille dp,
       contenu_ensemble_port cep
where  dcp.flag_pool <> ' '
and    dcp.code_portefeuille = dp.code_portefeuille
and    dp.objectif_portefeuille <> ' '
and    cep.code_portefeuille = dp.code_portefeuille
and    cep.code_ensemble_port <> 'OPCLIQ'
group by dp.code_portefeuille
order by dp.code_portefeuille;

spool $GPFETAT/ctrl_nbr_pool_prod.csv

select B.CODE_PORTEFEUILLE, ';',
       convertion_date(max(B.HISTO_DATE_VALORISATION)) as premiere_valo
from   inv_pool_prod_tmp a, gestion_valorisation b
where  a.code_portefeuille = b.code_portefeuille
group by b.code_portefeuille
order by b.code_portefeuille;

spool off;
quit;

-- as you can see, spaces are added after the value in the first column and before the value in the second column

more ctrl_nbr_pool_prod.csv

100801 ; 31-DEC-2014
100804 ; 31-DEC-2014
100805 ; 31-DEC-2014
100806 ; 31-DEC-2014
100809 ; 31-DEC-2014
100810 ; 31-DEC-2014
100811 ; 14-JUN-2016
100812 ; 14-JUN-2016
100813 ; 14-JUN-2016
100814 ; 14-JUN-2016
100815 ; 30-JUN-2016
100816 ; 30-JUN-2016
100817 ; 01-JUN-2017
126401 ; 10-NOV-2017

Thanks in advance for your reply.
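
One way to sidestep this kind of column padding in SQL*Plus, whatever the version, is to build each output line as a single concatenated expression so there are no separate columns for SQL*Plus to pad. This is only a hedged sketch of that workaround applied to the script above (the TO_CHAR format is an assumption, chosen to match the dates shown), not a confirmed explanation of the 12c/19c difference:

select rtrim(b.code_portefeuille) || ';' ||
       to_char(convertion_date(max(b.histo_date_valorisation)), 'DD-MON-YYYY') as csv_line
from   inv_pool_prod_tmp a, gestion_valorisation b
where  a.code_portefeuille = b.code_portefeuille
group by b.code_portefeuille
order by b.code_portefeuille;

SQL*Plus 12.2 onwards also offers SET MARKUP CSV ON DELIMITER ';' QUOTE OFF, which produces delimited output directly and may be worth comparing against the 12c behaviour.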
Categories: DBA Blogs

Optimizer Tip

Jonathan Lewis - 13 hours 1 min ago

This is a note I drafted in March 2016, starting with a comment that it had been about that time the previous year that I had written:

I’ve just responded to the call for items for the “IOUG Quick Tips” booklet for 2015 – so it’s probably about time to post the quick tip that I put into the 2014 issue. It’s probably nothing new to most readers of the blog, but sometimes an old thing presented in a new way offers fresh insights or better comprehension.

I keep finding ancient drafts like this (there are still more than 730 drafts on my blog at present – which means one per day for the next 2 years!) and if they still seem relevant – even if they are a little behind the times – I’ve taken to dusting them down and publishing.

With the passing of time, though, new information becomes available, algorithms change, and (occasionally) I discover I’ve made a significant error in my inferences. In this case  there are a couple of important additions that I’ve added to the end of the note.

Optimizer Tips (IOUG Quick Tips 2015)

There are two very common reasons why the optimizer picks a bad execution plan. The first is that its estimate of the required data volume is bad, the second is that it has a misleading impression of how scattered that data is.

The first issue is often due to problems with the selectivity of complex predicates, the second to unsuitable values for the clustering_factor of potentially useful indexes. Recent [ed: i.e. pre-2015] versions of the Oracle software have given us features that try to address both these issues, and I’m going to comment on them in the following note.

As always, any change can have side effects – introducing a new feature might have no effect on 99% of what we do, a beneficial effect on 99% of the remainder, and a hideous effect on the 1% of 1% that’s left, so I will be commenting on both the pros and cons of both features.

Column Group Stats

The optimizer assumes that the data in two different columns of a single table are independent – for example the registration number on your car (probably) has nothing to do with the account number of your bank account. So when we execute queries like:

     colX = 'abcd'
and  colY = 'wxyz'

the optimizer’s calculations will be something like:

“one row in 5,000 is likely to have colX = ‘abcd’ and one row in 2,000 is likely to have colY = ‘wxyz’, so the combination will probably appear in roughly one row in ten million”.

On the other hand we often find tables that do things like storing post codes (zipcodes) in one column and city names in another, and there’s a strong correlation between post codes and city – for example the district code (first part of the post code) “OX1” will be in the city of Oxford (Oxfordshire, UK). So if we query a table of addresses for rows where:

     district_code = 'OX1'
and  city          = 'Oxford'

there’s a degree of redundancy, but the optimizer will multiply the total number of distinct district codes in the UK by the total number of distinct city names in the UK as it tries to work out the number of addresses that match the combined predicate, and will come up with a result that is far too small.

In cases like this we can define “column group” statistics about combinations of columns that we query together, using the function dbms_stats.create_extended_stats(). This function will create a type of virtual column for a table and report the system-generated name back to us, and we will be able to see that name in the view user_tab_cols, and the definition in the view user_stat_extensions. If we define a column group in this way we then need to gather stats on it, which we can do in one of two ways, either by using the generated name or by using the expression that created it.


SQL> create table addresses (district_code varchar2(8), city varchar2(40));

Table created.

SQL> execute dbms_output.put_line( - 
>        dbms_stats.create_extended_stats( - 
>            user,'addresses','(district_code, city)'))

SYS_STU12RZM_07240SN3V2667EQLW

PL/SQL procedure successfully completed.

begin
        dbms_stats.gather_table_stats(
                user, 'addresses',
                method_opt => 'for columns SYS_STU12RZM_07240SN3V2667EQLW size 1'
        );
        dbms_stats.gather_table_stats(
                user, 'addresses',
                method_opt => 'for columns (district_code, city) size 1'
        );
end;
/

I’ve included both options in the anonymous pl/sql block, but you only need one of them. In fact if you use the second one without calling create_extended_stats() first Oracle will create the column group implicitly, but you won’t know what it’s called until you query user_stat_extensions.
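
If the column group was created implicitly, a quick way to find the generated name is to query user_stat_extensions; a minimal sketch for the addresses table used above:

select extension_name, extension
from   user_stat_extensions
where  table_name = 'ADDRESSES';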

I’ve limited the stats collection to basic stats with the “size 1” option. You can collect a histogram on a column group but since the optimizer can only use a column group with equality predicates you should only create a histogram in the special cases where you know that you’re going to get a frequency histogram or “Top-N” histogram.
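
If you do decide a histogram on the column group is appropriate, it is gathered in exactly the same way with a larger size clause; a sketch reusing the generated name from the example above (254 is just the traditional maximum bucket count, not a recommendation):

begin
        dbms_stats.gather_table_stats(
                user, 'addresses',
                method_opt => 'for columns SYS_STU12RZM_07240SN3V2667EQLW size 254'
        );
end;
/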

You can also define extended stats on expressions (e.g. trunc(delivery_date) – trunc(collection_date)) rather than column groups, but since you’re only allowed 20 column groups per table [see update 1] it would be better to use virtual columns for expressions since you can have as many virtual columns as you like on a table provided the total column count stays below the limit of 1,000 columns per table.
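
As a sketch of that alternative, an expression can be exposed as a virtual column and given statistics like any ordinary column; the table and column names here are hypothetical:

alter table deliveries add (
        delivery_lag generated always as (trunc(delivery_date) - trunc(collection_date)) virtual
);

begin
        dbms_stats.gather_table_stats(
                user, 'deliveries',
                method_opt => 'for columns delivery_lag size 1'
        );
end;
/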

Warnings
  • Column group statistics are only used for equality expressions. [see also update 2]
  • Column group statistics will not be used if you’ve created a histogram on any of the underlying columns unless there’s also a histogram on the column group itself.
  • Column group statistics will not be used if you query any of the underlying columns with an “out of range” value. This, perhaps, is the biggest instability threat with column groups. As time passes and new data appears you may find people querying the new data. If you haven’t kept the column stats up to date then plans can change dramatically as the optimizer switches from using column group stats to multiplying the selectivities of underlying columns.
  • The final warning arrives with 12c. If you have all the adaptive optimizer options enabled the optimizer will keep a look out for tables that it thinks could do with column group stats, and automatically create them the next time you gather stats on the table. In principle this shouldn’t be a problem – the optimizer should only do this when it has seen that column group stats should improve performance – but you might want to monitor your system for the arrival of new automatic columns (see the query sketch after this list).
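
A minimal sketch of such a check: column groups created automatically by the optimizer are flagged with a creator of 'SYSTEM' in the dictionary, so a periodic query like this will show any that have appeared since you last looked (assuming the standard user_stat_extensions view):

select table_name, extension_name, extension
from   user_stat_extensions
where  creator = 'SYSTEM'
order by table_name;
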
Preference: table_cached_blocks

Even when the cardinality estimates are correct we may find that we get an inefficient execution plan because the optimizer doesn’t want to use an index that we think would be a really good choice. A common reason for this failure is that the clustering_factor on the index is unrealistically large.

The clustering_factor of an index is a measure of how randomly you will jump around the table as you do an index range scan through the index – and the algorithm Oracle uses to calculate this number has a serious flaw in it: it can’t tell the difference between a little bit of localised jumping and constant random leaps across the entire width of the table.

To calculate the clustering_factor Oracle basically walks the index in order using the rowid at the end of each index entry to check which table block it has to visit, and every time it has to visit a “new” table block it increments a counter. The trouble with this approach is that, by default, it doesn’t remember its recent history so, for example, it can’t tell the difference in quality between the following two sequences of table block visits:

Block 612, block 87, block 154, block 3,  block 1386, block 834, block 237
Block 98,  block 99, block 98,  block 99, block 98,   block 99,  block 98

In both cases Oracle would say that it had visited seven different blocks and the data was badly scattered. This has always been a problem, but it became much more of a problem when Oracle introduced ASSM (automatic segment space management). The point of ASSM is to ensure that concurrent inserts from different sessions tend to use different table blocks, the aim being to reduce contention due to buffer busy waits. As we’ve just seen, though, the clustering_factor doesn’t differentiate between “a little bit of scatter” and “a totally random disaster area”.

Oracle finally addressed this problem by introducing a “table preference” which allows you to tell it to “remember history” when calculating the clustering_factor. So, for example, a call like this:

execute dbms_stats.set_table_prefs(user,'t1','table_cached_blocks',16)

would tell Oracle that the next time you collect statistics on any indexes on table t1 the code to calculate the clustering_factor should remember the last 16 table blocks it had “visited” and not increment the counter if the “next” block was already in that list.

If you look at the two samples above, this means the counter for the first list of blocks would reach 7 while the counter for the second list would only reach 2. Suddenly the optimizer will be able to tell the difference between data that is “locally” scattered and data that really is randomly scattered. You and the optimizer may finally agree on what constitutes a good index.

It’s hard to say whether there’s a proper “default” value for this preference. If you’re using ASSM (and there can’t be many people left who aren’t) then the obvious choice for the parameter would be 16 since ASSM tends to format 16 consecutive blocks at a time when a segment needs to make more space available for users [but see Update 3]. However, if you know that the real level of insert concurrency on a table is higher than 16 then you might be better off setting the value to match the known level of concurrency.

Are there any special risks to setting this preference to a value like 16? I don’t think so; it’s going to result in plans changing, of course, but indexes which should have a large clustering_factor should still end up with a large clustering_factor after setting the preference and gathering of statistics; the indexes that ought to have a low clustering_factor are the ones most likely to change, and change in the right direction.
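
Putting the pieces together, a minimal sketch of trying the preference on a single table (t1 and its indexes are the hypothetical objects from the earlier example): set the preference, regather the statistics with cascade so the index stats are recomputed, then compare the clustering_factor values before and after:

execute dbms_stats.set_table_prefs(user, 't1', 'table_cached_blocks', 16)
execute dbms_stats.gather_table_stats(user, 't1', cascade => true)

select index_name, clustering_factor
from   user_indexes
where  table_name = 'T1';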

Footnote: “Danger, Will Robinson”.

I’ve highlighted two features that are incredibly useful as tools to give the optimizer better information about your data and allow it to get better execution plans with less manual intervention. The usual warning applies, though: “if you want to get there, you don’t want to start from here”. When you manipulate the information the optimizer is using it will give you some new plans; better information will normally result in better plans but it is almost inevitable that some of your current queries are running efficiently “by accident” (possibly because of bugs) and the new code paths will result in some plans changing for the worse.

Clearly it is necessary to do some thorough testing but fortunately both features are incremental and any changes can be backed out very quickly and easily. We can change the “table_cached_blocks” one table at a time (or even, with a little manual intervention, one index at a time) and watch the effects; we can add column groups one at a time and watch for side effects. All it takes to back out of a change is a call to gather index stats, or a call to drop extended stats. It’s never nice to live through change – especially change that can have a dramatic impact – but if we find after going to production that we missed a problem with our testing we can reverse the changes very quickly.
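
For completeness, a sketch of what backing out each change might look like, using the hypothetical object names from the earlier examples (t1_idx is an assumed index name):

-- revert the table preference and recompute the clustering_factor
execute dbms_stats.delete_table_prefs(user, 't1', 'table_cached_blocks')
execute dbms_stats.gather_index_stats(user, 't1_idx')

-- remove a column group and its statistics
execute dbms_stats.drop_extended_stats(user, 'addresses', '(district_code, city)')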

Updates

Update 1 – 20 sets of extended stats. In fact the limit is the larger of 20 and ceiling(column count/10), and the way the arithmetic is applied is a little odd so there are ways to hack around the limit.

Update 2 – Column groups and equality. It’s worth a special mention that the predicate colX is null is not an equality predicate, so column group stats will not apply; there can, however, be unexpected side effects even in cases where you don’t use this “is null” predicate.

Update 3 – table_cached_blocks = 16. This suggestion doesn’t allow for systems running RAC.

Certified Kubernetes Administrator | Day 1 & Day 2 Training Concepts

Online Apps DBA - Sat, 2021-09-18 05:00

K8s Architecture, Components, Installation, and Networking Kubernetes is an open-source container orchestration tool. It automates processes such as deploying and managing containerized applications. Kubernetes follows a very straightforward yet flexible architecture. It consists of master nodes and worker nodes. The master communicates with the worker nodes with the help of the API server. Kubernetes […]

The post Certified Kubernetes Administrator | Day 1 & Day 2 Training Concepts appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Surpriseeee : )

H.Tonguç Yılmaz - Sat, 2021-09-18 00:12
Let’s make yet another fresh start here, sharing platform became Linkedin for me for the last years but I really missed writing my experiences in detail, let’s start with the recent changes for me: 1. We left İstanbul 4 years ago and moved to my home town Fethiye, started working remotely with my wife long…More

Introduction to Azure SQL Database For Beginners & Steps to Deploy

Online Apps DBA - Fri, 2021-09-17 05:45

There are very few relational database systems as established and widely used as Microsoft’s SQL Server. SQL Server on Microsoft Azure comes in 3 different types (commonly known as the Azure SQL family): 1. SQL Server on Azure VM (IaaS) 2. Azure SQL Database (PaaS) 3. Azure SQL Managed Instance (PaaS). The IaaS offering, SQL […]

The post Introduction to Azure SQL Database For Beginners & Steps to Deploy appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Schedule Tasks Using SQL Server Agent

Online Apps DBA - Fri, 2021-09-17 04:04

In this blog, we are going to cover scheduling tasks using SQL Server Agent. Database systems need regular maintenance, which includes tasks like making backups and updating statistics. Maintenance may also include regularly scheduled jobs that execute against a database. What is Task Scheduling? Database systems need regular maintenance, which includes tasks like […]

The post Schedule Tasks Using SQL Server Agent appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Happy 17th Birthday to this Oracle Security Blog

Pete Finnigan - Thu, 2021-09-16 14:46
It is almost 17 years since I started this blog on the 20th of September 2004. I had actually already been sort of blogging without blog software before that since 10th February 2004 with my ramblings section of my website....[Read More]

Posted by Pete On 16/09/21 At 11:24 AM

Categories: Security Blogs

CLOB column over DB Link

Tom Kyte - Wed, 2021-09-15 22:26
Hi Tom, We have a query which has to get a set of rows over a db link, with a CONTAINS predicate on a CLOB column. E.g. there is a DB A and a DB B with a table T1 (c1 varchar2(10), c2 clob). I want to run a query from DB A, using a db link to DB B, against T1 with a CONTAINS predicate on c2, and based on that query the rows have to return c1 from T1. Can you suggest any way in which we can get this done? We have a couple of restrictions though: we can't insert into DB A from DB B. Table T1 contains around 100 thousand rows and has around 20 columns. Thanks Ramkumar
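
For illustration only, this is the shape of the query the poster describes; the db link name db_b and the search term are hypothetical, and the sketch simply restates the requirement rather than offering a working solution:

select t1.c1
from   t1@db_b t1
where  contains(t1.c2, 'search_term') > 0;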
Categories: DBA Blogs

[Recap] Oracle Cloud Infrastructure Architect Associate | Training Day 2 | Networking Concepts and Overview

Online Apps DBA - Wed, 2021-09-15 06:09

 Oracle Cloud Infrastructure is a broad platform of public cloud services that allows customers to build and run wide range of applications.  Oracle Cloud Infrastructure Architect has a deep understanding of cloud and provides solutions on Oracle Infrastructure and services.  Responsibilities ➽ Advising stakeholders and translating business requirements into secure, scalable, and reliable cloud solutions.  Skills ➽ […]

The post [Recap] Oracle Cloud Infrastructure Architect Associate | Training Day 2 | Networking Concepts and Overview appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Recap] Oracle Cloud Infrastructure Architect Associate | Training Day 1 | Identity and Access Management

Online Apps DBA - Wed, 2021-09-15 05:53

 Oracle Cloud Infrastructure is a deep and broad platform of public cloud services enabling customers to build and run a wide range of applications.  Oracle Cloud Infrastructure Architect has a deep understanding of cloud and provides solutions on Oracle Infrastructure and services.  Responsibilities ➽ Advising stakeholders and translating business requirements into secure, scalable, and reliable cloud […]

The post [Recap] Oracle Cloud Infrastructure Architect Associate | Training Day 1 | Identity and Access Management appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Using Buildkite to perform Snyk Open Source and Snyk Code (SAST) tests

Pas Apicella - Tue, 2021-09-14 23:15

Buildkite is a platform for running fast, secure, and scalable continuous integration pipelines on your own infrastructure. In the example below I will run my Buildkite pipeline on my MacBook to perform two Snyk tests, one for open-source dependencies and the other a SAST test of the code itself.

Snyk is an open source security platform designed to help software-driven businesses enhance developer security.

You will need an account on Snyk and Buildkite to follow the steps below.

Steps

1. First, in Snyk, let's create a Service Account whose token I will use to authenticate. You could use your personal Snyk API token, but a service account is all you need to run "Snyk Tests", so it makes sense to use that.


2. Next, let's store that Service Account token somewhere from which I can safely inject it into my pipeline at the appropriate step. In this example I am using Google Secret Manager, but there are other choices of course.


Note: We will be using the secret NAME shortly "PAS_BUILDKITE_SA_SNYK_TOKEN"

3. You will need a Buildkite agent on your local infrastructure; in my case I'm using my MacBook, so that's done as follows:

https://buildkite.com/docs/agent/v3/macos 

pasapicella@192-168-1-113:~/demos/integrations/buildkite$ ./start-agent.sh

   _           _ _     _ _    _ _                                _
  | |         (_) |   | | |  (_) |                              | |
  | |__  _   _ _| | __| | | ___| |_ ___    __ _  __ _  ___ _ __ | |_
  | '_ \| | | | | |/ _` | |/ / | __/ _ \  / _` |/ _` |/ _ \ '_ \| __|
  | |_) | |_| | | | (_| |   <| | ||  __/ | (_| | (_| |  __/ | | | |_
  |_.__/ \__,_|_|_|\__,_|_|\_\_|\__\___|  \__,_|\__, |\___|_| |_|\__|
                                                 __/ |
 https://buildkite.com/agent                    |___/

2021-09-15 11:09:33 NOTICE Starting buildkite-agent v3.32.3 with PID: 50130
2021-09-15 11:09:33 NOTICE The agent source code can be found here: https://github.com/buildkite/agent
2021-09-15 11:09:33 NOTICE For questions and support, email us at: hello@buildkite.com
2021-09-15 11:09:33 INFO   Configuration loaded path=/usr/local/etc/buildkite-agent/buildkite-agent.cfg
2021-09-15 11:09:33 INFO   Registering agent with Buildkite...
2021-09-15 11:09:35 INFO   Successfully registered agent "y.y.y.y.tpgi.com.au-1" with tags []
2021-09-15 11:09:35 INFO   Starting 1 Agent(s)
2021-09-15 11:09:35 INFO   You can press Ctrl-C to stop the agents

4. You're now ready to create a pipeline. A pipeline is a template of the steps you want to run. There are many types of steps: some run scripts, some define conditional logic, and others wait for user input. When you run a pipeline, a build is created. Each of the steps in the pipeline ends up as a job in the build, and the jobs are then distributed to available agents.

In the example below our pipeline is created from a GitHub repo, and we then select the default branch. At that point incoming webhooks are sent to Buildkite by the source control provider (GitHub, GitLab, Bitbucket, etc.) to trigger builds; in this scenario we're using GitHub.


5. Let's go ahead and actually just edit the build steps using YAML. My final YAML is as follows, and I explain below why it looks this way, but in short I just want to run two Snyk tests rather than actually deploy anything for this demo.

steps:
  - commands:
      - "snyk config set api=$$SNYK_SA_TOKEN_VAR"
      - "snyk test --severity-threshold=$$SEVERITY_THRESHOLD"
      - "snyk code test --org=$$SNYK_ORG"
    plugins:
      - avaly/gcp-secret-manager#v1.0.0:
          credentials_file: /Users/pasapicella/snyk/clouds/gcp/buildkite-secrets-gcp.json
          env:
            SNYK_SA_TOKEN_VAR: PAS_BUILDKITE_SA_SNYK_TOKEN
    env:
      SEVERITY_THRESHOLD: "critical"
      SNYK_ORG: "pas.apicella-41p"
    label: "Employee API Snyk Test"

A few things to note here:

  • I am using a GCP secret manager plugin to retrieve my Snyk SA token with a name as follows "PAS_BUILDKITE_SA_SNYK_TOKEN"
  • I am using a Google Service Account JSON so I can authenticate with GCP and retrieve my secret "SNYK_SA_TOKEN_VAR", you will need to use a Service Account with privileges to at least READ from Google Secret Manager
  • I am using some local, non-sensitive ENV variables which get used at the appropriate time
  • I have three commands of which the first command sets my Snyk API token for the Snyk CLI
  • I have not installed the Snyk CLI because it already exists on my MacBook
  • I am only looking for my Snyk tests to fail if they find any CRITICAL issues
  • I should be running a "mvn package" here, but I can still execute a "snyk test" without it for demo purposes as we have a pom.xml
  • I could also build a container in the pipeline from the source code and then run a "snyk container test" as well, in fact I could even run "snyk iac test" against any IaC files in the repo as well
  • If a test fails we can easily run "snyk monitor" to load the results into the Snyk App but for this demo we don't do that

6. Now we can manually run a build or wait for a triggering event on our repo. Here are some screenshots of what it looks like, including some failures where we find vulnerabilities in a separate Node.js repo.

It would make more sense to create a Buildkite plugin for Snyk rather than execute commands using a script, and there is an example of one below. Having said that, the commands you run to execute a "snyk test" are simple enough to include in the pipeline YAML without the need for a plugin, especially if you already have infrastructure set up with the ability to run the Snyk CLI. Still, a plugin would generally be the right approach, as per the example below.

https://github.com/seek-oss/snyk-buildkite-plugin


Hopefully you have seen how easy it is to continuously avoid known vulnerabilities in your dependencies and code, by integrating Snyk into your continuous integration pipeline with Buildkite.

More Information
Snyk
Categories: Fusion Middleware

Register for a Free Webinar with PFCLForensics for Breached Oracle Databases

Pete Finnigan - Tue, 2021-09-14 07:46
I will be giving a free webinar hosted with our reseller/distributer in Slovenia and the Balkans region - Palsit . The free webinar is at 09:00 UK time or 10:00 CET time on the 22nd September 2021. In this webinar....[Read More]

Posted by Pete On 14/09/21 At 01:28 PM

Categories: Security Blogs

How to Call Fusion REST api in PLSQL using apex web service

Tom Kyte - Mon, 2021-09-13 15:26
Hi, I'm trying to consume Oracle Fusion REST web services in PL/SQL using apex_web_service. I'm getting the error below; could you please guide me on this? Also, please suggest how to enable basic authentication in the web service call, i.e. how to set the basic auth. ORA-20987: APEX - One or more cookies set in apex_util.g_request_cookies contain an invalid value. - Contact your application administrator. Details about this incident are available via debug id "49001

CREATE OR REPLACE FUNCTION ATP_REST RETURN CLOB
AS
  l_clob   CLOB;
  l_result VARCHAR2(32767);
BEGIN
  APEX_WEB_SERVICE.g_request_cookies.delete();
  APEX_WEB_SERVICE.g_request_cookies(1).name  := ''; -- I'm passing the username (basic auth)
  APEX_WEB_SERVICE.g_request_cookies(1).value := ''; -- pwd

  l_clob := APEX_WEB_SERVICE.make_rest_request(
              p_url         => 'https://ehpv-dev8.fa.em2.oraclecloud.com/bpm/api/4.0/tasks',
              p_http_method => 'GET'
              -- p_parm_name  => APEX_UTIL.string_to_table('p_int_1:p_int_2'),
              -- p_parm_value => APEX_UTIL.string_to_table(p_int_1 || ':' || p_int_2)
            );
  DBMS_OUTPUT.put_line('l_clob=' || l_clob);
  RETURN l_clob;
END;
/

Regards, Praveen Paulraj
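
As a hedged sketch of the usual approach: basic-auth credentials are normally supplied through the p_username and p_password parameters of apex_web_service.make_rest_request rather than through cookies. The credentials below are placeholders and the URL is taken from the question:

DECLARE
  l_clob CLOB;
BEGIN
  l_clob := APEX_WEB_SERVICE.make_rest_request(
              p_url         => 'https://ehpv-dev8.fa.em2.oraclecloud.com/bpm/api/4.0/tasks',
              p_http_method => 'GET',
              p_username    => 'fusion_user',      -- placeholder basic-auth user
              p_password    => 'fusion_password'   -- placeholder basic-auth password
            );
  DBMS_OUTPUT.put_line(substr(l_clob, 1, 200));
END;
/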
Categories: DBA Blogs

Oracle 12c EMON processing features

Tom Kyte - Mon, 2021-09-13 15:26
I read the article titled Event Monitor Process: Architecture and Known Issues (Doc ID 105067.1): "Notification implements a high-watermark scheme where, when the backlog of notification events hits an in-memory limit, the messages are, in the case of 9.2, spilled onto disk into the SYS.AQ_EVENT_TABLE_Q queue if a watermark has been set. In 9.2 the default value for the watermark is 0, which means that no messages will spill onto disk and the entire shared pool could be used for notification events. In 10.1 onwards the procedures DBMS_AQADM.GET_WATERMARK and DBMS_AQADM.SET_WATERMARK are available to set the amount of memory available for notification events, but the messages are no longer spilled onto disk. Instead the enqueueing processes are subject to flow control until the backlog has been cleared by the emon process." What does it mean for me as a developer? I have a 12.2 EE database under Linux (64-bit). I registered a notification callback procedure. How can I know, when enqueuing a message to the queue, whether my message will be approved by EMON or placed in the backlog? And why should I know that? Does "until the backlog has been cleared by the emon process" mean that the entire backlog will be lost without any further processing by EMON? I have noticed many times that there are messages in the queue while EMON's job PLSQL_NTFN hangs in the state "Waiting for messages in the queue", and these messages seem never to be processed. Maybe that is a case of clearing the backlog? And where can I find information (in server logs, in database objects) about clearing the backlog and about the MSGIDs of the messages that were cleared? TIA, Andrew.
Categories: DBA Blogs

INLIST ITERATOR

Tom Kyte - Mon, 2021-09-13 15:26
Hi Tom, Some questions about SQL tuning. 1) I found that when using "IN" in the where clause, INLIST ITERATOR is shown in the explain plan in a cost-based database (and, using the index correctly, the response is fast). However, there is no such INLIST ITERATOR under rule-based optimization (and, using a full table scan, the response is slow). Does INLIST ITERATOR only occur under cost-based optimization? Is it possible to force the optimizer to use INLIST ITERATOR in a rule-based database (without any hints added to the SQL statement or using alter session set optimizer_mode = choose)? Or is it possible to rewrite the "IN" into another form such that the index can be used in a rule-based database? I have tried to rewrite "IN" as "OR" but the index still cannot be used. The only way the index can be used is by using UNION ALL over the values of the "IN". 2) If the database is rule-based (optimizer_mode=rule) and the table has statistics, will Oracle use the cost-based optimizer to answer the query? I remember that Oracle will use rule-based if the optimizer_mode is set to rule (from the Oracle documentation), no matter whether the table has statistics, but I found that in some situations Oracle will use cost-based. Thanks, David
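
For illustration, this is the kind of rewrite the poster describes; the table, indexed column and literal values are hypothetical. Each UNION ALL branch carries a single equality predicate, which the rule-based optimizer can match to an index on col1:

-- original form: may be full-scanned under the rule-based optimizer
select * from t where col1 in (1, 2, 3);

-- rewritten form: each branch can use an index on col1
select * from t where col1 = 1
union all
select * from t where col1 = 2
union all
select * from t where col1 = 3;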
Categories: DBA Blogs

Defaulting Argument Values in a BASH script

The Anti-Kyte - Mon, 2021-09-13 13:49

I recently found out (again) how to default argument values in a shell script, so I thought I’d write it down this time.
The ABBA flavour in what follows is because we’re talking about BASH – the Bourne Again SHell – and Bjorn Again are an ABBA tribute band. This is quite handy because the ABBA back catalogue is rather versatile, having already lent itself to an explanation of Oracle password complexity.

Welcome to the inside of my head. Sorry about the mess…

Let’s start with a simple script – abba.sh :

#!/bin/bash

if [ $# = 0 ];  then
    echo 'No argument values passed in'
else
    TRACK=$1
    echo $TRACK
fi;

…which we now make executable :

chmod u+x abba.sh

Run this – first with no arguments passed in and then with a single argument and we get :

If we want to default the value of $TRACK (and let’s face it, who doesn’t love a bit of Dancing Queen), we can do the following (saved as abba2.sh) …

#!/bin/bash

TRACK="${1:-Dancing Queen}"
if [ $# != 1 ];  then
    echo 'No argument values passed in'
fi;
echo $TRACK

Now, when we run this, we can see that it’ll accept an argument as before. However, if no argument is passed in, the argument count is unaffected but the variable is initialized to its default value :

Migrate Your Relational Database To Azure

Online Apps DBA - Mon, 2021-09-13 03:48

In this blog, we are going to cover Migrate Your Relational Database To Azure. In Azure, you can migrate your database servers directly to IaaS VMs (pure lift and shift), or you can migrate to Azure SQL Database, for additional benefits. Azure SQL Database offers the managed instance and full database-as-a-service (DBaaS) options. What Is […]

The post Migrate Your Relational Database To Azure appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

FastAPI on Kubernetes with NGINX Ingress

Andrejus Baranovski - Mon, 2021-09-13 03:29
A simple tutorial about a complex thing - how to expose a FastAPI app to the world on Kubernetes with the NGINX Ingress Controller. I explain the structure of the Kubernetes Pod for FastAPI along with the Kubernetes Service. I show how FastAPI properties should be set to be accessible through the Ingress path definition. You will learn how to check the logs for the NGINX Ingress Controller and the FastAPI Pod.

 

How to Become a Successful AWS DevOps Engineer

Online Apps DBA - Mon, 2021-09-13 01:51

DevOps is becoming the trend in the cloud, and the need for DevOps professionals is also growing with time. DevOps Engineer ➽ DevOps engineers are IT professionals who introduce processes, tools, and methodologies to meet the needs of the software development life cycle, from coding and deployment to maintenance and updates.  Roles and Responsibilities ➽ DevOps Engineers […]

The post How to Become a Successful AWS DevOps Engineer appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Pages

Subscribe to Oracle FAQ aggregator