Tom Kyte

These are the most recently asked questions on Ask Tom

Help to understand this recursive query

Thu, 2021-10-14 22:06
Hi Tom, I am not able to fully understand the recursive query below, especially the conditions "WHERE POS > 0" and "WHERE POS = 0". My understanding is that POS is the result of REGEXP_INSTR(STR, '[A-Z][0-9]{5}', 1, RN). 1. If the first POS > 0, how can the final condition have POS = 0? 2. What is the difference between POS > 0 and REGEXP_INSTR(STR, '[A-Z][0-9]{5}', 1, RN) > 0? Using REGEXP_INSTR(STR, '[A-Z][0-9]{5}', 1, RN) > 0 doesn't give the expected result. Thanks in advance. <code>
WITH T AS (
  SELECT '///2E/A12345/E53421$$@#/A123456*#/A123456**4E53421$$$$$@@!!!' STR FROM DUAL UNION ALL
  SELECT 'ABC' FROM DUAL UNION ALL
  SELECT 'ABC67890' FROM DUAL UNION ALL
  SELECT 'ABC67890XYZ345679' FROM DUAL UNION ALL
  SELECT 'ABC67890XYZ345679UVW384453' FROM DUAL UNION ALL
  SELECT 'ABC67890XYZ345679UVW384453' || 'ABC67890XYZ345679' FROM DUAL UNION ALL
  SELECT 'ABC67890' || 'ABC67890' || 'ABC67890' || 'ABC67890' ||
         'ABC67890' || 'ABC67890' || 'ABC67890' FROM DUAL
),
TMP (RN, POS, STR, STR0) AS (
  SELECT 1, 1, STR, STR FROM T
  UNION ALL
  SELECT RN + 1,
         REGEXP_INSTR (STR, '[A-Z][0-9]{5}', 1, RN),
         REGEXP_REPLACE (STR,
                         SUBSTR (STR, REGEXP_INSTR (STR, '[A-Z][0-9]{5}', 1, RN), 6),
                         '',
                         REGEXP_INSTR (STR, '[A-Z][0-9]{5}', 1, RN) + 6),
         STR0
  FROM TMP
  WHERE POS > 0
)
SELECT * FROM TMP WHERE POS = 0
</code>
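For reference, REGEXP_INSTR returns 0 when the requested occurrence does not exist, which is what eventually makes POS drop to 0. A quick illustration (the column alias is mine):
<code>
-- Occurrence 1 exists ('C67890' starts at position 3)
SELECT REGEXP_INSTR('ABC67890', '[A-Z][0-9]{5}', 1, 1) POS FROM DUAL;

-- Occurrence 2 does not exist, so the function returns 0
SELECT REGEXP_INSTR('ABC67890', '[A-Z][0-9]{5}', 1, 2) POS FROM DUAL;
</code>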
Categories: DBA Blogs

begin_time/end_time in DBA_HIST_SNAPSHOT vs DBA_HIST_SYSMETRIC_SUMMARY vs AWR report

Wed, 2021-10-13 09:26
Hello Connor & Chris :-), I have some confusion about the <b>begin_time/end_time</b> in <b>DBA_HIST_SNAPSHOT</b> vs <b>DBA_HIST_SYSMETRIC_SUMMARY</b> vs an <b>AWR report</b>. <code>
13:52:22 SYS@emcdb> desc dba_hist_snapshot
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SNAP_ID                                   NOT NULL NUMBER
 DBID                                      NOT NULL NUMBER
 INSTANCE_NUMBER                           NOT NULL NUMBER
 STARTUP_TIME                              NOT NULL TIMESTAMP(3)
 <b>BEGIN_INTERVAL_TIME                       NOT NULL TIMESTAMP(3)
 END_INTERVAL_TIME                         NOT NULL TIMESTAMP(3)</b>
 FLUSH_ELAPSED                                      INTERVAL DAY(5) TO SECOND(1)
 SNAP_LEVEL                                         NUMBER
 ERROR_COUNT                                        NUMBER
 SNAP_FLAG                                          NUMBER
 SNAP_TIMEZONE                                      INTERVAL DAY(0) TO SECOND(0)
 BEGIN_INTERVAL_TIME_TZ                             TIMESTAMP(3) WITH TIME ZONE
 END_INTERVAL_TIME_TZ                               TIMESTAMP(3) WITH TIME ZONE
 CON_ID                                             NUMBER
</code> <code>
13:54:21 SYS@emcdb> desc dba_hist_sysmetric_summary
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SNAP_ID                                   NOT NULL NUMBER
 DBID                                      NOT NULL NUMBER
 INSTANCE_NUMBER                           NOT NULL NUMBER
 <b>BEGIN_TIME                                NOT NULL DATE
 END_TIME                                  NOT NULL DATE</b>
 INTSIZE                                   NOT NULL NUMBER
 GROUP_ID                                  NOT NULL NUMBER
 METRIC_ID                                 NOT NULL NUMBER
 METRIC_NAME                               NOT NULL VARCHAR2(64)
 METRIC_UNIT                               NOT NULL VARCHAR2(64)
 NUM_INTERVAL                              NOT NULL NUMBER
 MINVAL                                    NOT NULL NUMBER
 MAXVAL                                    NOT NULL NUMBER
 AVERAGE                                   NOT NULL NUMBER
 STANDARD_DEVIATION                        NOT NULL NUMBER
 SUM_SQUARES                                        NUMBER
 CON_DBID                                           NUMBER
 CON_ID                                             NUMBER
</code> I understand that an AWR report has a begin/end snap time. Is it like this: begin_snap_time (AWR) = begin_interval_time (DBA_HIST_SNAPSHOT) and end_snap_time (AWR) = end_interval_time (DBA_HIST_SNAPSHOT)? Or is it begin_snap_time (AWR) = begin_time (DBA_HIST_SYSMETRIC_SUMMARY) and end_snap_time (AWR) = end_time (DBA_HIST_SYSMETRIC_SUMMARY)? Since each snap_id has its own begin/end_interval_time, but an AWR report usually spans two snap_ids, I'm not sure where the begin/end snap time in an AWR report comes from. Best Regards Quanwen Zhao
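To see the interval times recorded for a range of snapshots, a query along these lines can be used (the snap_id range is illustrative):
<code>
SELECT snap_id,
       begin_interval_time,
       end_interval_time
FROM   dba_hist_snapshot
WHERE  snap_id BETWEEN 100 AND 102   -- hypothetical snapshot range
ORDER  BY snap_id;
</code>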
Categories: DBA Blogs

Database link : Relation of sessions between databases

Mon, 2021-10-11 21:06
Hello, In the context of a database link, is there a way to correlate the two sessions on database "A" and database "B"? (Database "A" queries objects from database "B" using a dblink.) In other words, from database "B" can we retrieve the SID of the session on database "A" that created the session on database "B" (and vice versa, from database "A", the SID of the session created on database "B" for the dblink)? I hope my question is understandable. Thank you very much, Sebastien.
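As a starting point for investigating this, two views are often used; a sketch (this only narrows things down, it does not give an exact SID-to-SID mapping by itself):
<code>
-- On database "A": database links currently open in your own session
SELECT db_link, logged_on, open_cursors
FROM   v$dblink;

-- On database "B": incoming dblink sessions typically show the
-- source host in MACHINE/PROGRAM
SELECT sid, serial#, machine, program, module
FROM   v$session
WHERE  program LIKE 'oracle@%';
</code>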
Categories: DBA Blogs

Can kafka read uncommitted data from oracle database

Fri, 2021-10-08 01:26
Hi Tom, hope you are doing well. I want to ask: can Kafka read uncommitted data from an Oracle database, i.e. data from a session where the data has not yet been committed?
Categories: DBA Blogs

Archivelog backup using RMAN

Fri, 2021-10-08 01:26
Hi, I have a multitenant 19c database. Below is the directory where my archives are generated: /erpprd64/oradata/prd64cdb/fast_recovery_area/PRD64CDB/archivelog/ In it I have a folder with today's date, 2021_10_05, and these are the archives generated in it: <code>
o1_mf_1_783_joqsfg35_.arc  o1_mf_1_785_joqy8dk1_.arc  o1_mf_1_787_jor31mh6_.arc
o1_mf_1_789_jor7wz6n_.arc  o1_mf_1_791_jorf9hoc_.arc  o1_mf_1_784_joqw4oq3_.arc
o1_mf_1_786_jor0d91w_.arc  o1_mf_1_788_jor5681t_.arc  o1_mf_1_790_jorbn5gh_.arc
o1_mf_1_792_jorhyqhs_.arc
</code> Is there a way to back up (using RMAN) only the archives generated in today's folder? I need this because I will then create a backup script which will take an RMAN backup of the database, the archives generated today, and the control file; using this backup I will refresh my database on a test server. Please assist. Thanks, Abdul Wahid Solanki
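One possible approach, sketched below and untested here, is to select archived logs by creation time rather than by directory, assuming "today's folder" simply means archives created since midnight (the FORMAT path is a placeholder):
<code>
RMAN> BACKUP ARCHIVELOG FROM TIME 'TRUNC(SYSDATE)'
      FORMAT '/erpprd64/backup/arch_%U';   -- destination path is hypothetical
</code>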
Categories: DBA Blogs

How to move the Jobs from one User to another User Present in the Different DB server

Fri, 2021-10-08 01:26
Team, This is in continuation of the question at the link below: https://asktom.oracle.com/pls/apex/f?p=100:12::::RP:P12_QUESTION_ID,P12_PREV_PAGE:9543884800346428497,11 I could not continue on that link, so sorry for raising a new request. The solution provided there works well when both DB users are on the same server. But now we are migrating to a new DB user, and we want to migrate the scheduled jobs as well. Please let us know if it can be done via SQL or PL/SQL. Any sample code on this is most welcome. Regards, Vinesh
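A common way to move scheduler jobs between databases is to extract their DDL with DBMS_METADATA and replay it on the target; a minimal sketch (object type 'PROCOBJ' covers scheduler objects; the job and schema names here are hypothetical):
<code>
SELECT DBMS_METADATA.GET_DDL('PROCOBJ', 'MY_JOB', 'OLD_USER') job_ddl
FROM   dual;
-- run the resulting DBMS_SCHEDULER.CREATE_JOB call as the new user
</code>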
Categories: DBA Blogs

Trying to stop export import job and get an error

Fri, 2021-10-08 01:26
hi tom, I wanted to ask about interactive-command mode (Ctrl+C) when using Data Pump export/import. When I run an export and try to kill the export job, it works with no problem: <code>
expdp test/asdzxc@pdb22 directory=asdzxc dumpfile=countries.dump reuse_dumpfiles=y schemas=test

Export: Release 19.0.0.0.0 - Production on Tue Oct 5 11:46:24 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "TEST"."SYS_EXPORT_SCHEMA_02":  test/********@pdb22 directory=asdzxc dumpfile=countries.dump reuse_dumpfiles=y schemas=test
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes
</code> but when I try to stop the export job using stop_job, I get an error: <code>
expdp test/asdzxc@pdb22 directory=asdzxc dumpfile=countries.dump reuse_dumpfiles=y schemas=test

Export: Release 19.0.0.0.0 - Production on Tue Oct 5 11:50:08 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "TEST"."SYS_EXPORT_SCHEMA_02":  test/********@pdb22 directory=asdzxc dumpfile=countries.dump reuse_dumpfiles=y schemas=test
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Export> stop_job
UDE-31626: operation generated ORACLE error 31626
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 1849
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4791
ORA-06512: at "SYS.DBMS_DATAPUMP", line 6204
ORA-06512: at line 1
</code> How does this error happen, and how can I fix it?
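For reference, a running or orphaned Data Pump job can usually be reattached to and then stopped cleanly from a new client session; a sketch (job name taken from the log above):
<code>
expdp test/asdzxc@pdb22 attach=SYS_EXPORT_SCHEMA_02

Export> stop_job=immediate
</code>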
Categories: DBA Blogs

Undo blocks stored in the redo log buffer

Fri, 2021-10-08 01:26
hey tom, I read a couple of articles which state that when we issue a DML statement and commit, the changes made to the data blocks and to the undo blocks are stored in the redo log buffer. What purpose do these undo block changes serve exactly?
Categories: DBA Blogs

Tablespace Offline

Fri, 2021-10-08 01:26
Hey Tom, I wanted to ask a simple thing: when do we need to recover a tablespace after taking it offline with ALTER TABLESPACE ... OFFLINE TEMPORARY? I heard that sometimes we need to recover it, especially when part of the tablespace was already offline at the moment we take it offline temporary. Is this true, or do we never have to recover a tablespace after taking it offline with the TEMPORARY option?
Categories: DBA Blogs

Using the DBMS_STATS-package

Fri, 2021-10-08 01:26
We are doing performance testing of SQL statements in our project. When running explain plans on the statements, the results are often not realistic because we have not loaded the tables with data, and therefore the indexes, in most cases, will not be used as they will be in production. I found the DBMS_STATS package and am wondering if I can use it to virtually load my test environment for explain-plan purposes. Our Oracle support team doubts that it is possible to extract the statistics using GET/EXPORT_TABLE/INDEX/COLUMN_STATS, modify them, and then use SET/IMPORT_TABLE/INDEX/COLUMN_STATS to put the new statistics back into the data dictionary. Is there any description of the table that CREATE_STAT_TABLE creates? Regards, Harald
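Faking production-scale statistics without loading data is indeed what procedures like DBMS_STATS.SET_TABLE_STATS are for; a minimal sketch (the schema, table, and figures below are made up):
<code>
BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname => 'TESTUSER',      -- hypothetical schema
    tabname => 'ORDERS',        -- hypothetical table
    numrows => 10000000,        -- pretend 10 million rows
    numblks => 120000,
    avgrlen => 100);
END;
/
</code>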
Categories: DBA Blogs

Exception Handling for bulk insert

Fri, 2021-10-08 01:26
Hi Tom, thank you — your blogs are really helpful. I have a specific requirement for capturing the exceptions that might occur during a bulk insert. When an exception occurs, while writing the information to the exception table I need to concatenate all the fields from my staging table into a single column, as opposed to having each and every column from the staging table in the exception table. Below is an example of the stored procedure. So, to make it clearer: after the mandatory columns in the exception table, I need one column that holds all the fields from the AP_ITEMS table. Please let me know if this is possible. <code>
create or replace procedure AP_ITEM_ADD_CHANGE (ITEM_IN in AP_ITEM_ARRAY)
as
begin
  forall indx in 1 .. ITEM_IN.COUNT
    insert /*+ APPEND */ into AP_ITEMS ("ITEM_ID", "ITEM_NAME", "SERIAL", "ADDED_ON")
    values (ITEM_IN(indx).ITEM_ID,
            ITEM_IN(indx).ITEM_NAME,
            ITEM_IN(indx).SERIAL,
            ITEM_IN(indx).ADDED_ON)
    log errors into err$_AP_ITEM reject limit unlimited;
  commit;
end;
/

create table err$_AP_ITEM
( ora_err_number$ number,
  ora_err_mesg$   varchar2(2000),
  ora_err_rowid$  rowid,
  ora_err_optyp$  varchar2(2),
  ora_err_tag$    varchar2(2000),
  <How to get the concatenation of all the columns from the AP_ITEMS table?> );
</code>
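For comparison, the standard way to generate a LOG ERRORS table is DBMS_ERRLOG, which mirrors the base table's columns one-for-one (a concatenated single column would need a custom table or a view on top instead); a sketch:
<code>
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(
    dml_table_name     => 'AP_ITEMS',
    err_log_table_name => 'ERR$_AP_ITEM');
END;
/
</code>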
Categories: DBA Blogs

deq_condition on dbms_aq.dequeue not helping with improving performance in Oracle AQ

Wed, 2021-10-06 12:46
Hi Tom, I have an AQ, and the queue table [QT_WRITE_OFF] has 40,196,299 records. The task is to dequeue about 28 million items (which meet a certain criterion) from this queue. Unfortunately, the queue was left to grow too big and I have to find a way of dequeueing. dbms_aq.dequeue takes way too long to process. The deq_condition we have isn't helping much, because the column isn't part of the index on the queue table, so the deq_condition still does a full table scan. Could you please advise on an approach/optimization that can improve performance of the dequeue? Can I use a non-payload column in the deq_condition? The script: <code>
DECLARE
  v_dequeueoptions  dbms_aq.dequeue_options_t;
  v_msgprop         dbms_aq.message_properties_t;
  v_msgid           raw(16);
  v_payload         ttableid;
  v_descr           varchar2(100) := 'dequeue write-offs';  -- declaration added; used below
  errPos            pls_integer   := 0;                     -- declaration added; used below

  CURSOR woff_trxns IS
    SELECT t.rowid, t.msgid, t.user_data.obj_id obj_id
    FROM   trxns        tut,   -- normal partitioned table
           qt_write_off t      -- queue table
    WHERE  tut.trxn_type = 1
    AND    t.user_data.obj_id = tut.obj_id
    AND    tut.gen_status = 'AFAIP';

  TYPE c1data IS TABLE OF woff_trxns%ROWTYPE;
  c1rec  c1data;
  handle_exception EXCEPTION;
BEGIN
  OPEN woff_trxns;
  LOOP
    FETCH woff_trxns BULK COLLECT INTO c1rec LIMIT 1000;
    EXIT WHEN c1rec.COUNT = 0;

    FOR i IN 1 .. c1rec.COUNT LOOP
      v_dequeueoptions.deq_condition := 'tab.user_data.obj_id = ' || c1rec(i).obj_id;
      v_dequeueoptions.wait          := DBMS_AQ.no_wait;

      dbms_aq.dequeue (
        queue_name         => 'Q_WRITE_OFF',
        dequeue_options    => v_dequeueoptions,
        message_properties => v_msgprop,
        payload            => v_payload,
        msgid              => v_msgid );
      COMMIT;
    END LOOP;
  END LOOP;
  CLOSE woff_trxns;
  COMMIT;
  errPos := 5;
EXCEPTION
  WHEN OTHERS THEN
    Error_Log.LogErr(v_descr, 'Update', errPos, Error_Log.ERR_LEVEL_ERROR, SQLERRM);
END;
</code> The queue was created as a normal queue.
<code>
-- Create queue
begin
  sys.dbms_aqadm.create_queue(
    queue_name     => 'Q_WRITE_OFF',
    queue_table    => 'QT_WRITE_OFF',
    queue_type     => sys.dbms_aqadm.normal_queue,
    max_retries    => 5,
    retry_delay    => 0,
    retention_time => 0);
end;
/

-- Create queue table
begin
  sys.dbms_aqadm.create_queue_table(
    queue_table        => 'QT_WRITE_OFF',
    queue_payload_type => 'TTABLEID',
    sort_list          => 'PRIORITY, ENQ_TIME',
    compatible         => '10.0.0',
    primary_instance   => 0,
    secondary_instance => 0,
    storage_clause     => 'tablespace VPC_DATA pctfree 10 initrans 1 maxtrans 255
                           storage (initial 64K next 1M minextents 1 maxextents unlimited)');
end;
/

-- Index on the queue table:
-- AQ$_QT_WRITE_OFF_I [q_name, local_order_no, state, priority, enq_time, step_no, chain]

CREATE OR REPLACE TYPE "TTABLEID" as object
( obj_id number(16) );
/
</code> Thank you for your assistance. The LiveSQL link isn't available at the moment.
Categories: DBA Blogs

DBMS_DATAPUMP and DBMS_METADATA packages

Tue, 2021-10-05 18:26
hi tom, I'm new to Data Pump and I have a question about DBMS_DATAPUMP and DBMS_METADATA and what exactly these two packages do. From what I know, the DBMS_DATAPUMP package is used to export and import data and metadata, and the DBMS_METADATA package is used to retrieve the DDL to create tablespaces, tables, constraints, and more. Is this true, or do I have the wrong understanding of these two packages? Thank you.
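As a quick illustration of the DBMS_METADATA side — retrieving the DDL of an existing object (table and schema names here are just examples):
<code>
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM dual;
</code>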
Categories: DBA Blogs

CREATE TABLESPACE documentation doubt

Tue, 2021-10-05 00:06
Oracle Database, SQL Language Reference 19c E96310-11 August 2021. The CREATE TABLESPACE documentation (15 SQL Statements, CREATE TABLESPACE) says: 'Use the storage_clause to specify storage parameters for all objects created in the tablespace. This clause is not valid for a temporary tablespace or a locally managed tablespace. For a dictionary-managed tablespace, you can specify the following storage parameters with this clause: ENCRYPT, INITIAL, NEXT, MINEXTENTS, MAXEXTENTS, MAXSIZE, and PCTINCREASE. Refer to storage_clause for more information.' Meanwhile, the storage_clause documentation (8 Common SQL DDL Clauses, storage_clause) says: 'The specification of storage parameters for objects in locally managed tablespaces is supported for backward compatibility. If you are using locally managed tablespaces, then you can omit these storage parameter when creating objects in those tablespaces.' but later it states: 'The storage_clause is interpreted differently for locally managed tablespaces. For locally managed tablespaces, Oracle Database uses INITIAL, NEXT, PCTINCREASE, and MINEXTENTS to compute how many extents are allocated when the object is first created. After object creation, these parameters are ignored. For more information, see CREATE TABLESPACE.' ----- I'm a little confused here. Could you clarify it? It looks like CREATE TABLESPACE points to storage_clause for more information, and storage_clause points back to CREATE TABLESPACE for more information. Moreover, CREATE TABLESPACE says the clause is not valid for locally managed tablespaces, while storage_clause says something different.
Categories: DBA Blogs

What would be an appropriate approach to host a C file outside DB server

Tue, 2021-10-05 00:06
Hello AskTom team, Can you please help with an architectural issue that we are facing as a result of a move to Exadata? Pasted below are the scenario and our question. Thanks for all the good work you are doing. Thanks, ------------------------------- Scenario: We call a C library from the Oracle DB using an external procedure call. This library (client) talks with a daemon (server) running on the DB server. Calls to the C library complete quickly (in microseconds) and the library will be heavily invoked (millions of times) from SQL / PL/SQL, in a row-by-row fashion. Due to some restrictions, we cannot move the logic of the C code inside the DB. This works fine on our standalone DB servers. However, we now have to move to Exadata, and our Exadata team does not allow any "non-Oracle" file to be kept on the servers. This includes the C library as well as the server C code. We are trying to figure out an apt location to move the library and the server code to. We have considered multiple options, such as: 1) create a RESTful service and invoke it instead of the C code, or 2) host the C code on a non-ExaCC DB server and call it via DB link. However, as soon as we move the C code off the DB server, microseconds turn into milliseconds, primarily because of communication over the network. Any idea what other options we can explore?
Categories: DBA Blogs

DBMS_SCHEDULER execute bat file

Mon, 2021-10-04 05:46
HI, I created a bat file to run sqlldr to load the data from a csv file into Oracle, and I want to execute it automatically, so I tried it with DBMS_SCHEDULER: <code>
exec DBMS_SCHEDULER.run_job('DAILY_LOAD');

Error report -
ORA-27369: job of type EXECUTABLE failed with exit code: No such file or directory
ORA-06512: at "SYS.DBMS_ISCHED", line 196
ORA-06512: at "SYS.DBMS_SCHEDULER", line 486
ORA-06512: at line 1
</code> The job was created like this: <code>
begin
  dbms_scheduler.create_job (
    job_name            => 'DAILY_LOAD',
    job_type            => 'EXECUTABLE',
    job_action          => 'c:\Windows\System32\cmd.exe',
    number_of_arguments => 2,
    start_date          => SYSTIMESTAMP,
    repeat_interval     => 'freq=hourly; byminute=0',
    end_date            => NULL,
    enabled             => false,
    comments            => 'TEST');

  dbms_scheduler.set_job_argument_value (
    job_name          => 'DAILY_LOAD',
    argument_position => 1,
    argument_value    => '/c');

  dbms_scheduler.set_job_argument_value (
    job_name          => 'DAILY_LOAD',
    argument_position => 2,
    argument_value    => 'C:\orant10\BIN\loadDB.bat');

  DBMS_SCHEDULER.enable(name => 'DAILY_LOAD');
end;
/
</code>
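On Windows, EXECUTABLE jobs typically also need a credential so the job runs as an OS user that can reach the .bat file; a sketch (the credential name, OS account, and password below are placeholders):
<code>
begin
  dbms_scheduler.create_credential(
    credential_name => 'BATCH_CRED',
    username        => 'batch_user',   -- placeholder OS account
    password        => 'secret');      -- placeholder

  dbms_scheduler.set_attribute(
    name      => 'DAILY_LOAD',
    attribute => 'credential_name',
    value     => 'BATCH_CRED');
end;
/
</code>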
Categories: DBA Blogs

NUMBER Datatype

Thu, 2021-09-30 10:26
Hello 'Tom', this question is regarding the NUMBER data type in an Oracle table, or in PL/SQL and so on. I used to always define a number column as NUMBER(10), NUMBER(12,2) or NUMBER(1). Nowadays I see more and more that columns are defined as just NUMBER, with the precision and scale skipped. My question is: what is the best practice? Keep using plain NUMBER, or try to make sense of the column's contents and define it as NUMBER(4), NUMBER(10), NUMBER(20,3), etc.? Obviously, if I want an implicit constraint on the column, NUMBER(4) is the logical choice. In answering how and why to use the NUMBER data type, PGA usage / performance / cursors / collections / index and table sizes etc. should also be taken into account. Thanks in advance. Greetings, Wouter
Categories: DBA Blogs

How to improve timings of query executed on a DB link?

Thu, 2021-09-30 10:26
Hello Team, First of all, thanks for all the good work you are doing. It would be great if you could help us understand how we can improve the timings of a query executed over a DB link. Consider an Oracle instance (DB1) that can talk with another Oracle instance (DB2) via a DB link. A select statement fetches ~100,000 values from DB1 and passes them to a function that is defined on DB2. The function does some operations on the values and returns the results. All the data is in DB1, but the function has to be in DB2. We have tried multiple things, and using the DRIVING_SITE hint helped reduce timings. However, it comes with its own issues (code rewrite, the hint can be ignored, issues with the techniques used in plan stabilization, query transformation, non-applicability to DML). Things like switching to bulk processing did not help much. Anything else we can try? We are looking for techniques that can be applied to a large number of statements. Thanks, AB
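For reference, the DRIVING_SITE hint they mention is written along these lines (the table, link, and column names below are placeholders):
<code>
SELECT /*+ DRIVING_SITE(r) */ l.id, r.result
FROM   local_data l,
       remote_results@db2 r      -- placeholder remote object over the DB link
WHERE  r.id = l.id;
</code>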
Categories: DBA Blogs

12c backup recovery questions

Thu, 2021-09-30 10:26
I want to verify that my answers to the questions below are correct. Please kindly help me.

Your database instance is abnormally terminated because of a power outage. At the next startup, from which point in the redo log does the recovery start?
A. from the last complete checkpoint position
B. from the beginning of the current redo log file until the instance failure
C. from the last committed transaction
D. from the beginning of the current redo log file to the checkpoint position
E. from the most recent incremental checkpoint
I think the answer should be B. Please verify.

Which three requirements should be successfully met by an Oracle Secure Backup (OSB) user so that OSB performs RMAN backup or restore requests? (Choose three.)
A. RMAN preauthorization on the host
B. OSB encryption for data in transport and on tape
C. matching the OS user identity of the Oracle instance associated with the database username
D. assigned to a class with rights to back up or restore Oracle database
E. scheduling of the RMAN backup to occur automatically at user-defined intervals
F. assigned to a class with rights to browse all directories and catalogs
I think A, D, F. Please help me.
Categories: DBA Blogs

12c space thresholds

Thu, 2021-09-30 10:26
You specified the warning and critical thresholds for a locally managed tablespace to be 60% and 70%, respectively. From the tablespace space usage metrics, you find that the space usage has reached the specified warning threshold value, but no alerts have been generated. What could be the reason for this?
A. The event parameter was not set.
B. The sql_trace parameter is set to false.
C. Enterprise Manager was not used.
D. The statistics_level parameter is set to basic.
E. The timed_statistics parameter is set to false.
I think the answer should be A. Please help.
Categories: DBA Blogs
