Thursday, July 24, 2014

Oracle 11 DataPump, and connecting to a shutdown database.

How to use Data Pump to export and import an Oracle database.

I used to use the 'imp' and 'exp' commands. These are deprecated from Oracle 10g onwards, so it was time to start using the EXPDP and IMPDP commands.


Step 1 - Create an Oracle Directory

First I want to do an export of my database. I use Windows Server 2012 running Maximo with an Oracle 11gR2 database.

When using the exp command I could specify a directory where the dump file should be stored. The EXPDP command instead uses an Oracle directory object, passed via the 'directory' parameter, which specifies where to store the dump file. In my example I use a local directory on the server. In a production environment you could create a shared folder on the server so you can manage your dumps from an administrative workstation.

I created the directory by executing the following script in SQL Developer as the SYS user:
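The original script was a screenshot; a minimal equivalent, assuming the directory object is named DIR01 (as used in the export command later) and a local path of E:\dump (the path is my assumption), looks like this:

```sql
-- Run as SYS. The path must exist on the database server itself;
-- Oracle does not create it for you.
CREATE OR REPLACE DIRECTORY DIR01 AS 'E:\dump';

-- Allow the SYSTEM user to read and write dump files in it.
GRANT READ, WRITE ON DIRECTORY DIR01 TO system;
```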




In this script we grant the system user permissions on the directory. You should always use the system user for your exports and imports.

Now we have a directory. Oracle also comes with a default Data Pump directory (DATA_PUMP_DIR). If you want to list all the configured directories, you can query the ALL_DIRECTORIES view.
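For example:

```sql
-- Lists every directory object visible to the current user,
-- including the one created above and Oracle's default DATA_PUMP_DIR.
SELECT directory_name, directory_path FROM all_directories;
```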


On my test server this will give the following result:

Step 2 - Export

Now we are ready for the export. I created an export.cmd file in the directory that DIR01 points to.

The content of the cmd:

expdp system/sys schemas=MAXIMO directory=DIR01 dumpfile=ctginst1.dmp logfile=Export.log reuse_dumpfiles=yes

Here we see the system account with password sys, the schema we want to export, the directory where the export should be stored, the name of the dump file, the name of the log file, and the option to overwrite an existing dump file.
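A side note: putting the password on the command line (system/sys) exposes it; expdp will prompt for the password if you leave it out, and the other options can be moved into a parameter file. A sketch, where the file name export.par is my own choice:

```
schemas=MAXIMO
directory=DIR01
dumpfile=ctginst1.dmp
logfile=Export.log
reuse_dumpfiles=yes
```

You would then run it as: expdp system parfile=export.par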

In order to run the export I open a dosbox as Administrator and navigate to the location of export.cmd.

Then I type export.cmd and press Enter. The export starts to run.

After a short while I got an error message. It might not occur in your configuration, but I got the error shown below:

This is the error:

ORA-04301 error
ORA-39097: Data Pump job encountered unexpected error -4301

I managed to solve it with information from this page:

In short: an Oracle memory problem.

To fix it, connect to the database as sys and run:

show parameter sga_t;
show parameter sga_max;

If they are low (below 300M, I guess) you can increase them by running:

alter system set sga_max_size=300M scope=spfile;
alter system set sga_target=300M scope=spfile;

Setting SGA_TARGET depends on SGA_MAX_SIZE, and changing SGA_MAX_SIZE requires bouncing (restarting) the database for it to take effect.
I use SQL Plus to log in to the database as sys and then execute:

startup force;  (see below for an update on startup force; this will most likely not work and you will have to follow the steps below)

Now I run the export again, and hurray! It works.

Update 11 September 2014
After working with this configuration for a while the problem came back! I checked the parameters, but they were still at 300M.
First I stopped the MXServer and then I bounced the database. While bouncing the database with startup force; the database wouldn't start up anymore and I couldn't connect. I got the errors below:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
SQL> startup mount
ORA-24324: service handle not initialized

ORA-01041: internal error. hostdef extension doesn't exist

I decided to reboot the server. After the reboot the database was back online, so I decided to try the export again.
This time the export was successful. So no changes made, just a reboot...

To prevent the error with startup force, I used the information from this site:

So it seems that I first have to shut down the database and then use startup force.
The test:

I log in to SQL Plus as sys, then execute:
shutdown immediate;

Then I close SQL Plus and start it up again. But when I try to log in I get an ORA-12514 again.
This seems normal, because the database is down, but is there a way to bring it back to life with a startup command?

With Google I found this:

assuming that after you shut down the database, you want to log in to the database through the listener:
by default the database uses dynamic registration and is only available through the listener in MOUNT or OPEN state.
To be able to "log in" to a not-running or NOMOUNT database (as SYSDBA), you have to register the database instance manually in your listener.ora.

OK, makes sense, let's do that.
I go to my 'listener.ora', located in 'E:\app\Administrator\product\11.2.0\dbhome_1\NETWORK\ADMIN', and I added the SID for my database to the SID_LIST_LISTENER:


SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ctginst1)
      (ORACLE_HOME = E:\app\Administrator\product\11.2.0\dbhome_1)
    )
  )


Next I restarted the TNS Listener service.

Now I opened a SQL Plus and logged in as sysdba:

Next I execute startup force; 

and whoohooo :) success!
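To double-check that the instance really is open after the startup force, you can query v$instance:

```sql
-- STATUS should be OPEN; in MOUNT or NOMOUNT state the export would still fail.
SELECT status FROM v$instance;
```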

I also encountered an ORA-00333 error at one point when trying to start up the database on my VM. I came across this great post that fixed it:

Step 3 - Import

So now we have a backup. Let's see if it is a good one by dropping the Maximo schema and restoring it with the dump created in the previous step.

First, drop the schema by connecting to the database as sys and executing the drop user command. Dropping the Maximo user and recreating it is standard IBM procedure, so I will not post any screenshots etc. For more information about this check:
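The drop-and-recreate itself boils down to something like this (the password and tablespace name are my assumptions; check the IBM documentation for the exact grants Maximo needs):

```sql
-- Run as SYS: remove the schema and everything in it.
DROP USER maximo CASCADE;

-- Recreate the empty user; the tablespace name MAXDATA is an assumption.
CREATE USER maximo IDENTIFIED BY maximo
  DEFAULT TABLESPACE maxdata
  TEMPORARY TABLESPACE temp;

GRANT CONNECT, RESOURCE TO maximo;
```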

Now we have an empty Maximo schema in our database.

Next I created an import.cmd in the same directory as the dump file and put the following line in it:

impdp system/sys@ctginst1 directory=DIR01 dumpfile=CTGINST1.DMP logfile=import.log

If you want to import into another schema (for example, loading a dump from a production environment into a test environment) you need to use the remap_schema=MAXIMO:MAXTEMP parameter in your import.cmd.
So if production is MAXIMO and the target environment is MAXTEMP, the complete cmd file would be:
impdp system/sys@maxtemp directory=DIR01 dumpfile=CTGINST1.DMP remap_schema=MAXIMO:MAXTEMP logfile=importCTGINST1ToMAXTEMP.log

Again I run it from a dosbox and, surprise surprise, an error pops up... :(

ORA-39083: object type VIEW failed to create with error:
ORA-00928: missing SELECT keyword

Again Google was my friend, and I found this page with a solution:

According to that site, the issue only occurs on certain database versions with a multibyte character set.

So in order to fix it you could install a patch for the Oracle Database or, like I did, use this workaround:

Connect to the database as sys and execute the following:

alter system set events '31156 trace name context forever, level 0x400';

Now I try the import again:

And ...... done. :)
I start the WebSphere server again to check that Maximo works:

I created an extra database for this exercise, but to save resources I set the Oracle service for that database to Manual so it won't start on reboot; I want to start it manually when I need it.

But now when I do the DataPump the following error occurs:

UDE-12560: operation generated ORACLE error 12560
ORA-12560: TNS:protocol adapter error

To solve this I need to specify the SID. To do this you simply start a dosbox and type:
set ORACLE_SID=orcl

Even better is to put it in the export.cmd.
So the complete export.cmd will be:

set ORACLE_SID=ctginst1

expdp system/sys schemas=MAXIMO directory=DIR01 dumpfile=ctginst1.dmp logfile=Export.log reuse_dumpfiles=yes

Now I don't know if this occurs because I stopped one of the Oracle services or because I now have two SIDs, but no matter what, this will fix it :)

