Table Recovery – How We Rescued Dropped Tables in 19.23 Release

Release 19.23 was cruising smoothly, until we hit a pothole.

In a moment of chaos during a deployment cycle, two critical tables in our Oracle database, ETL_COMMONRACE_TBD and ETL_COMMONRACE_HIST_TBD, were accidentally dropped. What followed was a war room scenario involving RMAN, SCNs, and an unexpected twist involving pluggable database parameters.

Here's how we restored order from disorder.


The Problem

Two essential tables went missing from REGIY2K_DBO. Attempts to recover them using RMAN's RECOVER TABLE command were met with repeated failures.

The error logs screamed:


RMAN-03002: failure of recover command
ORA-03114: not connected to ORACLE
RMAN-03009: failure of sql command on clone_default channel
ORA-01092: ORACLE instance terminated. Disconnection forced

Diagnosis

After some deep digging, we realized the problem lay in Oracle's default behavior: when RMAN creates the auxiliary instance for table recovery, it tries to open it with pluggable database (PDB) support enabled.

But our target environment was non-CDB (not using pluggable databases).

The solution lay in an overlooked but powerful lever: a custom auxiliary parameter file, initaux.ora.


The Fix: Disabling Pluggable DBs

We created an auxiliary parameter file with a critical line:

enable_pluggable_database=FALSE

This setting prevented RMAN from starting the auxiliary instance as a container database, which avoided the ORA-01092 instance termination error.
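Before reaching for this parameter, it is worth confirming that the target really is non-CDB. A minimal sketch, assuming a local sysdba connection (the helper name is ours, not part of the original runbook):

```shell
# is_cdb: hypothetical helper that asks the target database whether it was
# created as a container database; V$DATABASE.CDB prints YES for a CDB,
# NO for a non-CDB (like our target).
is_cdb() {
  sqlplus -s "/ as sysdba" <<'EOF'
SET HEADING OFF FEEDBACK OFF
SELECT cdb FROM v$database;
EOF
}
```

If this prints NO, the auxiliary instance must not be opened as a CDB, which is exactly what enable_pluggable_database=FALSE enforces.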


🔁 The Recovery Command

Here’s the RMAN recovery script that finally worked:

nohup rman target / @restore_table.rmn log=restore_tables.log &

run {
  SET AUXILIARY INSTANCE PARAMETER FILE TO '/home/oracle/initaux.ora';
  RECOVER TABLE REGIY2K_DBO.ETL_COMMONRACE_HIST_TBD, REGIY2K_DBO.ETL_COMMONRACE_TBD
    UNTIL SCN 8311804385864
    AUXILIARY DESTINATION '/restoredb/restore/'
    DATAPUMP DESTINATION '/restoredb/restore/dpump/'
    DUMP FILE 'export.dmp'
    NOTABLEIMPORT
    KEEP AUXILIARY;
}
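Because the script uses NOTABLEIMPORT, RMAN leaves the recovered tables in the Data Pump dump rather than importing them automatically, so the import is a separate manual step. A hedged sketch of that step (the RESTORE_DPUMP directory object name and the import options are our assumptions; the directory object would first be created to point at /restoredb/restore/dpump/):

```shell
# Hypothetical manual import of the recovered tables from the dump that
# RMAN wrote to the DATAPUMP DESTINATION. Wrapped in a function so it can
# be reviewed before being run against the target.
import_recovered_tables() {
  impdp \"/ as sysdba\" \
    directory=RESTORE_DPUMP \
    dumpfile=export.dmp \
    tables=REGIY2K_DBO.ETL_COMMONRACE_TBD,REGIY2K_DBO.ETL_COMMONRACE_HIST_TBD \
    table_exists_action=SKIP \
    logfile=import_tables.log
}
```

table_exists_action=SKIP is a deliberately cautious choice: if the tables somehow already exist in the target schema, the import will not overwrite them.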

And the contents of initaux.ora:


sga_target=20G
sga_max_size=20G
enable_pluggable_database=FALSE

With that in place, we were able to:

  • Run the recovery without a catalog connection

  • Get past the ORA-03113 and ORA-03114 disconnection errors

  • Successfully restore the dropped tables to their original state


The Outcome

Once restored, the tables were validated against backups, and a Data Pump export confirmed integrity. The system was back online with zero data loss and minimal downtime.


💡 Lessons Learned

  • Always double-check your DROP TABLE statements, especially in live environments.

  • In Oracle 19c+, PDB-related flags can cause unexpected issues, even if you don’t use pluggable databases.

  • Custom parameter files like initaux.ora give you full control over recovery context.



Debugging EBS 12.2 SSL Setup

 Hi, 


Here is an article on setting up TLS for our EBS 12.2 environment. We have an F5 load balancer that terminates SSL: the user-to-F5 leg is encrypted with TLS, while the F5-to-EBS leg is non-TLS. After setting up the context variables, everything looked good except Forms login, which kept producing a popup:

"The Certificate is not valid and cannot be used to verify the identity of this website". We know that we have created new cert and associated with Load Balancer. No other clue on logs of EBS. 



We enabled tracing by setting these Java runtime parameters:

-Djava.security.debug=all -Djavax.net.debug=all




Java Console errors:

Caused by: java.net.ConnectException: Connection refused: connect

        at java.net.DualStackPlainSocketImpl.connect0(Native Method)

        at java.net.DualStackPlainSocketImpl.socketConnect(Unknown Source)

        at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)

        at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)

        at java.net.AbstractPlainSocketImpl.connect(Unknown Source)

        at java.net.PlainSocketImpl.connect(Unknown Source)

        at java.net.SocksSocketImpl.connect(Unknown Source)

        at java.net.Socket.connect(Unknown Source)

        at java.net.Socket.connect(Unknown Source)

        at java.net.Socket.<init>(Unknown Source)

        at java.net.Socket.<init>(Unknown Source)

        at com.sun.jndi.ldap.Connection.createSocket(Unknown Source)

        ... 74 more

certpath: Can't create URICertStore: unable to create InitialDirContext using supplied parameters

certpath: Trying to fetch CRL from DP http://pki.example.com/pki/DEV%20Staff%20Issuing%20CA.crl

certpath: CertStore URI:http://pki.example.com/pki/DEV%20Staff%20Issuing%20CA.crl

certpath: Downloading new CRL...

certpath: X509CRLSelector.match: update out-of-range

certpath: Returning 0 CRLs

Allow unsafe renegotiation: false

Allow legacy hello messages: true



I collected and reviewed the Java Console logs, and the trace (note "X509CRLSelector.match: update out-of-range" followed by "Returning 0 CRLs") clearly pointed at CRLs, so I contacted the security team. It took a while to pin down the cause: an expired CRL (Certificate Revocation List) file for the Dev Staff Issuing CA. Once the updated CRL was published to the pki.example.com web server, CRL validation against the full chain started working again.
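An expired CRL can be confirmed quickly from the command line. A minimal sketch, assuming the CRL file has already been downloaded from the distribution point URL shown in the trace (the DER-first guess reflects that Windows-based CAs typically publish DER-encoded CRLs):

```shell
# check_crl: print a CRL's validity window so an expired nextUpdate is
# obvious. A CRL is considered stale once nextUpdate is in the past, and
# clients will then reject the chain exactly as the Forms applet did.
check_crl() {
  # Try DER first (typical for Windows-published CRLs), then fall back to PEM.
  openssl crl -inform DER -in "$1" -noout -lastupdate -nextupdate 2>/dev/null ||
    openssl crl -inform PEM -in "$1" -noout -lastupdate -nextupdate
}
```

Running check_crl against the downloaded file prints lastUpdate= and nextUpdate= lines; a nextUpdate earlier than the current date is the smoking gun.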


Hooray!!! Happy Debugging!!