Open two connections like "HR" and "HRNEW" -> Database -> Compare -> Schemas -> Define Source and Target Schemas -> Select Object Types in "Object Types to Compare" -> Check and Set "Stop when # of differences reaches" option to more than "1000" like "10000" in "Misc Options" tab -> Click "Run" image -> Check script in "Sync Script" -> Lastly click "Move Script to Editor and Run Now" button -> That's all :)
Tuesday, 13 December 2016
Wednesday, 30 November 2016
How to export/import schema/table with Oracle Data Pump?
Define new user:
SQL> CREATE USER NEWHR
IDENTIFIED BY newhr
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
PROFILE DEFAULT
ACCOUNT UNLOCK;
SQL>
GRANT CREATE SESSION TO NEWHR;
GRANT ALTER SESSION TO NEWHR;
GRANT CONNECT TO NEWHR;
GRANT RESOURCE TO NEWHR;
GRANT CREATE DATABASE LINK TO NEWHR;
GRANT CREATE SEQUENCE TO NEWHR;
GRANT CREATE SESSION TO NEWHR;
GRANT CREATE SYNONYM TO NEWHR;
GRANT CREATE VIEW TO NEWHR;
GRANT CREATE TYPE TO NEWHR;
GRANT CREATE TABLE TO NEWHR;
GRANT CREATE TRIGGER TO NEWHR;
Check current Oracle directories
SQL> select * from dba_directories;
Define new/replace new Oracle Directory
SQL> create or replace directory export_dir AS 'C:\Alper_ORACLE';
Give grants to user
SQL> grant read, write on directory export_dir to hr;
Export the schema
expdp hr/hr schemas=hr directory=export_dir dumpfile=hr.dmp logfile=hr.log
Export tables
expdp hr/hr tables=employees,employees_new directory=export_dir dumpfile=hr_table.dmp logfile=hr.log
Give grants to new user
SQL> grant read, write on directory export_dir to newhr;
Import schema to new machine
impdp newhr/newhr schemas=hr remap_schema=hr:newhr directory=export_dir dumpfile=hr.dmp logfile=hr.log
Import tables to new machine
impdp newhr/newhr tables=employees,employees_new remap_schema=hr:newhr directory=export_dir dumpfile=hr_table.dmp logfile=hr_table.log
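To confirm the remap worked, a quick object count in the target schema can be run afterwards (a minimal sketch; it assumes the NEWHR user from the example above):
SQL> SELECT object_type, COUNT(*)
FROM dba_objects
WHERE owner = 'NEWHR'
GROUP BY object_type;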
If local storage is not big enough to hold the export file, use a database link instead
e.g.
HR is located on machine A
NEWHR is located on machine B
"XE" is the TNS alias on machine B that points to machine A
On B machine:
SQL> grant create database link to newhr;
SQL> create database link old_hr connect to hr identified by "HR" using 'XE';
To check the db link:
SQL> select * from v$instance@XE;
impdp newhr/newhr DIRECTORY=export_dir NETWORK_LINK=old_hr remap_schema=hr:newhr logfile=newhr.log
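While an expdp/impdp run is in progress (including the NETWORK_LINK variant above), its state can be watched from another session (a minimal sketch using the standard dba_datapump_jobs view):
SQL> SELECT owner_name, job_name, operation, state
FROM dba_datapump_jobs;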
Labels:
data pump,
oracle,
schema export,
schema import,
table export,
table import
What are database startup options?
SQL> STARTUP
--> starts normally
SQL> STARTUP nomount
--> starts the instance in nomount mode (instance only, nothing mounted)
SQL> STARTUP restrict
--> starts in restricted mode (only users with the RESTRICTED SESSION privilege can connect)
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION
--> disables restricted mode
SQL> STARTUP force
--> aborts the instance if it is running, then starts it (use when a normal startup fails)
SQL> STARTUP open recover
--> starts the database and performs media recovery before opening
SQL> ALTER DATABASE OPEN READ ONLY
--> opens the database read only
SQL> ALTER DATABASE OPEN READ WRITE
--> opens the database read write (the default open mode)
In addition:
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION
--> enables restricted mode
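To see which of these modes the instance and database are currently in, something like the following can be used (a small sketch; both views are standard):
SQL> SELECT instance_name, status, logins FROM v$instance;
-- STATUS: STARTED (nomount), MOUNTED or OPEN; LOGINS: ALLOWED or RESTRICTED
SQL> SELECT open_mode FROM v$database;
-- READ WRITE or READ ONLY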
Tuesday, 29 November 2016
How to enable debug mode?
SQL> GRANT DEBUG CONNECT SESSION TO hr;
SQL> GRANT DEBUG ANY PROCEDURE TO hr;
Lastly, reconnect to the session and try again :)
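To verify the grants took effect before reconnecting, the data dictionary can be checked (a minimal sketch, assuming the HR user from the example):
SQL> SELECT grantee, privilege
FROM dba_sys_privs
WHERE grantee = 'HR' AND privilege LIKE 'DEBUG%';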
Wednesday, 23 November 2016
PLS-00201 Solution
Error
PLS-00201: identifier 'TABLE_ALPER' must be declared
Solution
Define a public synonym for the identifier
e.g.
SQL>
CREATE PUBLIC SYNONYM TABLE_ALPER
FOR HR.TABLE_ALPER;
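If the error persists inside definer-rights PL/SQL, the calling user usually also needs a direct object grant, because privileges received only through roles are not visible there (a sketch; "some_user" is a hypothetical grantee):
SQL> GRANT SELECT ON HR.TABLE_ALPER TO some_user;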
How to truncate a folder in Outlook?
Right-click on the desired folder -> Select "Delete All" -> Click "Yes" -> Done! :)
Labels:
outlook
Wednesday, 16 November 2016
ORA-01578 solution (if object is INDEX)
Error
ORA-01578: ORACLE data block corrupted (file # 100, block # 50000)
Solution
Find the object that was corrupted:
SQL>
SELECT SEGMENT_TYPE, OWNER || '.' || SEGMENT_NAME
FROM DBA_EXTENTS
WHERE FILE_ID = 100 AND 50000 BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;
Then re-create index or indexes:
SQL> drop index alper_index_pk;
SQL>
CREATE UNIQUE INDEX HR.ALPER_INDEX_PK ON HR.EMPLOYEES
(EMPLOYEE_ID)
LOGGING
TABLESPACE INDEX_TS
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
)
NOPARALLEL;
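Before and after the rebuild, the index structure and any remaining corrupt blocks can be checked (a small sketch; both commands are standard):
SQL> ANALYZE INDEX HR.ALPER_INDEX_PK VALIDATE STRUCTURE;
SQL> SELECT * FROM v$database_block_corruption;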
Thursday, 10 November 2016
How to get AWR report as html file?
Generate Snapshots
SQL> EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
-> Run this twice (once at the beginning and once at the end of the period you want to measure)
Get the Automatic Workload Repository (AWR) report in HTML format
Login -> Targets -> Databases -> Click Database -> Performance -> AWR -> AWR Report -> Select By Snapshot -> Click Generate Report -> Click Save to File -> Done :)
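If Enterprise Manager is not available, the same HTML report can be produced from SQL*Plus with the bundled script (a sketch; it prompts for the report type and the begin/end snapshot IDs):
SQL> @?/rdbms/admin/awrrpt.sql
-- choose "html" as the report type and pick the two snapshots created above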
How to add target manually in 12c EM ?
Prepare the 12c EM server
# vi /etc/hosts
10.10.10.10 alperdb.com
-> remote machine (agent) IP and hostname
Prepare the environment (at new host)
# useradd grid12 -g dba
# vi /etc/hosts
10.10.10.2 grid12.com
-> Current grid12 server machine IP and hostname
# vi /etc/sudoers
grid12 ALL=NOPASSWD: ALL
grid12 ALL=(ALL) ALL
# vi /etc/oraInst.loc
inventory_loc=/home/grid12/oraInventory
inst_group=dba
# passwd grid12
-> password: grid12
# sudo su - grid12
$ mkdir /home/grid12/oraInventory
Adding Host
Login -> Setup from top-right corner -> Add Target -> Add Target Manually -> Add Host -> Installation Base Directory: /home/grid12/agent -> Instance Directory field will be filled automatically ->
Configure Agent
Login -> Setup from top-right corner -> Add Target -> Add Target Manually -> Select Add Targets Declaratively by Specifying Target Monitoring Properties -> Target Type: Database Instance, Monitoring Agent: 10.10.10.10:3872 -> Target Name: ALPERDB, Database System: ALPERDB, Oracle Home Path: (the output of echo $ORACLE_HOME), Port: 1521 -> Click Test Connection -> Submit -> Done! :)
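After deployment, the agent can also be checked from the new host itself (a sketch; the path follows the Installation Base Directory used above and may differ in your setup):
$ /home/grid12/agent/agent_inst/bin/emctl status agent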
Wednesday, 9 November 2016
ORA-27102 solution
Error
SQL> startup;
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device
Solution
$ vi /etc/sysctl.conf
edit the kernel.shmall, kernel.shmmax and kernel.shmmni parameters
e.g.
$ getconf PAGE_SIZE
$ cat /proc/sys/kernel/shmall
page size (4096) * shmall (524288) = 2147483648 bytes (2 GB)
kernel.shmall = 2147483648
kernel.shmmax = 21474836480
kernel.shmmni = 4096
# sysctl -p
apply the settings
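As a rough sizing rule, shmmax should be at least the planned SGA size in bytes and shmall at least that size divided by the page size (shmall is counted in pages, shmmax in bytes). A sketch for an assumed 8 GB SGA on a 4 KB page system:
$ echo $(( 8 * 1024 * 1024 * 1024 ))        # 8589934592 -> candidate kernel.shmmax (bytes)
$ echo $(( 8 * 1024 * 1024 * 1024 / 4096 )) # 2097152    -> candidate kernel.shmall (pages)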
Alternative link: http://ozsoyler.blogspot.com/2014/10/how-to-set-shmmax-value-and-what-is-that.html
Monday, 7 November 2016
How to import old Toad profile for new one?
- Export the old "User Files" folder from %USERPROFILE%\AppData\Roaming\Quest Software
- Import it into the "User Files" folder under %USERPROFILE%\AppData\Roaming\Quest Software on the new installation
Labels:
toad
Wednesday, 2 November 2016
How to Clone Database Manually without any tool such as RMAN?
Collect the live database paths before starting
$ ps -ef | grep pmon
$ echo $ORACLE_SID
$ echo $ORACLE_BASE
$ echo $ORACLE_HOME
$ . oraenv
SQL> select name from v$datafile; --db files
SQL> select member from v$logfile; --redo log files
SQL> select name from v$controlfile; --control files
SQL> select name from v$tempfile; --temp files
Copy parameter file (pfile) from live database
SQL> create pfile='/tmp/Alper-init.ora' from spfile;
or
* Copy the live db pfile under the $ORACLE_HOME/dbs directory with the desired file name, e.g. 'Alper-init.ora'
Edit pfile
* Change the related parameters such as db_name and the control file paths (in Alper-init.ora)
Copy control file from live database
SQL> alter database backup controlfile to trace as '/tmp/Alper-ctl';
or
SQL> alter database backup controlfile to trace;
-- Run SQL> show parameter user_dump_dest; to find it
Edit control file
* Prepare a new create-controlfile script from the generated trace, starting with one of these statements
SQL> CREATE CONTROLFILE REUSE SET DATABASE "ALPERDB" RESETLOGS NOARCHIVELOG
-- Use this if you want to re-use the live db control files in the new clone db (i.e. you already moved the control files to the new clone db)
or
SQL> CREATE CONTROLFILE SET DATABASE "ALPERDB" RESETLOGS NOARCHIVELOG
-- Use this if you want to use a fresh control file in the new clone db
-- SQL> show parameter control; -- to find control file
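On the new machine the edited script is typically run after the files have been copied: start the instance in nomount mode with the new pfile, then create the control file. A minimal sketch (the redo log and datafile names and the character set here are hypothetical; the real list comes from the trace generated above):
SQL> STARTUP NOMOUNT PFILE='/tmp/Alper-init.ora';
SQL> CREATE CONTROLFILE SET DATABASE "ALPERDB" RESETLOGS NOARCHIVELOG
LOGFILE
  GROUP 1 '/oradata/NEW_ALPERDB/redo01.log' SIZE 50M,
  GROUP 2 '/oradata/NEW_ALPERDB/redo02.log' SIZE 50M
DATAFILE
  '/oradata/NEW_ALPERDB/system01.dbf',
  '/oradata/NEW_ALPERDB/sysaux01.dbf',
  '/oradata/NEW_ALPERDB/undotbs01.dbf',
  '/oradata/NEW_ALPERDB/users01.dbf'
CHARACTER SET AL32UTF8;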
Copy all database files to remote machine
$ scp -r /oradata/ALPERDB/ oracle@10.10.10.10:/oradata/NEW_ALPERDB
Open database
SQL> alter database open resetlogs;
Define temp tablespace for new database
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/oradata/NEW_ALPERDB/temp_01.dbf'
SIZE 10240M REUSE AUTOEXTEND OFF;
For detailed information please click this
That's all :)
Tuesday, 1 November 2016
How to configure listener port when it changes (like 1522) ?
After changing the listener.ora and tnsnames.ora files, just run this command
SQL> ALTER SYSTEM SET LOCAL_LISTENER='(ADDRESS = (PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1522))' SCOPE=BOTH;
Check current setting
SQL> SHOW PARAMETER local_listener;
Check listener
lsnrctl status
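For reference, the matching entries in the two files would look roughly like this (a sketch; the ALPERDB service name is hypothetical, the host and port follow the example above):
listener.ora:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.10.10)(PORT = 1522))
    )
  )
tnsnames.ora:
ALPERDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.10.10)(PORT = 1522))
    (CONNECT_DATA = (SERVICE_NAME = ALPERDB))
  )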
That's all.
Thursday, 27 October 2016
How to configure instant client for Toad?
-> Unzip the instant client .zip file
e.g. C:\oracle_client\12.1.0\client_1
-> Add this folder to "Environment Variables -> Path"
That's all :)
Labels:
oracle,
oracle client,
toad
How to see the latest SQL history from the audit views?
Check the audit parameter
SQL> show parameter audit;
-- basic options: 'DB', 'DB,EXTENDED', 'OS', 'NONE'
Enable the parameter permanently
Add this to the init.ora file (audit_trail is a static parameter, so a restart is required):
*.audit_trail='DB'
Flush the aud table
TRUNCATE TABLE sys.aud$
Back up the related table
CREATE TABLE ALPER_AUD
AS
SELECT * FROM dba_audit_trail
Examine the sessions
SELECT * FROM ALPER_AUD
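For a quick look at the most recent audited statements, a query like this can be used (a sketch; the sql_text column is only populated when audit_trail is set to 'DB,EXTENDED'):
SQL> SELECT username, timestamp, action_name, obj_name, sql_text
FROM dba_audit_trail
ORDER BY timestamp DESC;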
Wednesday, 26 October 2016
How to define snapshot and synonym?
Define source table at DB1 machine
CREATE TABLE t_alper_source
AS
SELECT * FROM employees;
ALTER TABLE t_alper_source ADD CONSTRAINT pk_alper_source PRIMARY KEY (employee_id);
CREATE SNAPSHOT LOG ON t_alper_source;
Define snapshot at DB2 machine
CREATE SNAPSHOT "SS_ALPER_SOURCE"
TABLESPACE "USERS"
BUILD IMMEDIATE
USING INDEX TABLESPACE "USER_INDEX"
REFRESH FAST
AS
SELECT * FROM T_ALPER_SOURCE@DB1;
-- After this executes, there should be two objects: a "Table" and a "Materialized View".
Define synonym at DB2 machine
CREATE PUBLIC SYNONYM ALPER_TABLE FOR SS_ALPER_SOURCE;
Dynamic SQL for snapshots
SQL>
select 'create snapshot '||owner||'.ss_new_employees as select * from '||owner||'.new_employees@'||owner||'db;
create synonym '||owner||'.new_employees for '||owner||'.ss_new_employees;' from dba_snapshots where name='SS_EMPLOYEES';
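To confirm the snapshot exists and when it was last refreshed, dba_snapshots can be checked on DB2 (a minimal sketch):
SQL> SELECT owner, name, last_refresh
FROM dba_snapshots
WHERE name = 'SS_ALPER_SOURCE';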
Tuesday, 25 October 2016
What are the snapshot (Materialized Views) refresh methods?
begin
DBMS_SNAPSHOT.REFRESH( 'v_alper_ss','f');
end;
/
f -> fast refresh
? -> force refresh
c -> complete refresh
a -> always refresh (equivalent to a complete refresh)
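The configured refresh method and the type/time of the last refresh can be checked afterwards (a small sketch using the standard dba_mviews view):
SQL> SELECT mview_name, refresh_method, last_refresh_type, last_refresh_date
FROM dba_mviews;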
Labels:
materialized views,
snapshot,
ss
Monday, 24 October 2016
How to define/check/drop jobs?
Define a new job (will run every hour)
DECLARE
X NUMBER;
BEGIN
BEGIN
SYS.DBMS_JOB.SUBMIT (
job => X,
what => 'HR.P_COMPILE_INVALID_OBJECTS;',
next_date => TO_DATE ('24.10.2016 10:00:00',
'dd.mm.yyyy hh24:mi:ss'),
interval => '(SYSDATE)+1/24',
no_parse => FALSE);
SYS.DBMS_OUTPUT.PUT_LINE ('Job Number is: ' || TO_CHAR (x));
EXCEPTION
WHEN OTHERS
THEN
RAISE;
END;
COMMIT;
END;
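To run the newly submitted job once immediately instead of waiting for the next interval, dbms_job.run can be used (a sketch; 42 stands for the job number printed by the block above):
SQL> exec dbms_job.run(42);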
Check job parameter
SQL> select * from v$parameter where name like '%job%';
if value is "0":
SQL> alter system set job_queue_processes=20;
Add this setting to the init.ora file:
*.job_queue_processes=20
Info: If the job_queue_processes parameter is greater than 0, jobs such as materialized view refresh jobs run automatically. With job_queue_processes=0 they do not run automatically, but a user can still run them manually.
Check current job status
SQL> select * from user_jobs;
or
SQL> select * from dba_jobs;
Check running jobs
SQL> select * from dba_jobs_running;
-- take note of the sid in the output (e.g. 2283)
Get detailed info about the running job
SQL> SELECT s.username, osuser, process, machine, p.terminal, type, sid, s.serial#, s.program
FROM v$session s, v$process p
WHERE s.sid = 2283
AND s.paddr = p.addr;
-- s.program: CJQ0 (job queue coordinator), J000, J001 ... J999 (job queue slave processes)
-- These are background processes
Drop the job with sys or system user (as sysdba)
-- Before doing this, make sure the related job is marked broken and that there is no killed session still holding an enq: TX row lock on SYS.JOB$
SQL>
SELECT DISTINCT
'ALTER SYSTEM KILL SESSION '''
|| b.sid
|| ','
|| b.serial#
|| ''' IMMEDIATE;'
AS KILL_THEM_ALL
FROM sys.dba_dml_locks a, sys.v_$session B
WHERE B.SID = a.session_id
AND owner NOT IN ('WMSYS',
'CTXSYS',
'DBSNMP')
AND B.TYPE <> 'BACKGROUND'
AND name = 'JOB$';
-- Run that
SQL>
begin
sys.dbms_job.broken(1111,true); -- 1111 -> the job id
commit;
end;
-- Kill the job's session (the sid noted above), then remove the job
SQL>
SELECT DISTINCT
'ALTER SYSTEM KILL SESSION '''
|| b.sid
|| ','
|| b.serial#
|| ''' IMMEDIATE;'
AS KILL_THEM_ALL
FROM sys.v_$session B
WHERE b.sid = 2283;
SQL>
begin
sys.dbms_job.remove(1111); -- 1111 -> the job id
commit;
end;
Dynamic SQL for dropping jobs with sys or system user (as sysdba)
-- Before doing this, make sure the related job is broken and its session is killed
-- Run that
SQL>
SELECT 'begin sys.dbms_ijob.broken(' || job || ', true); commit; end;' || chr(10) || '/' AS DROP_JOBS
FROM dba_jobs;
SQL>
SELECT 'begin sys.dbms_ijob.remove(' || job || '); commit; end;' || chr(10) || '/' AS DROP_JOBS
FROM dba_jobs;
-- dbms_ijob lets SYS manage jobs owned by any user (dbms_job itself is superseded by dbms_scheduler in newer versions)
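On newer releases the same hourly job could also be created with DBMS_SCHEDULER instead of DBMS_JOB; a minimal sketch (the job name is hypothetical, the procedure is the one referenced above):
SQL>
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'COMPILE_INVALID_OBJECTS_JOB',
job_type => 'STORED_PROCEDURE',
job_action => 'HR.P_COMPILE_INVALID_OBJECTS',
start_date => SYSTIMESTAMP,
repeat_interval => 'FREQ=HOURLY;INTERVAL=1',
enabled => TRUE);
END;
/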
Info: Please click this for P_COMPILE_INVALID_OBJECTS
Labels:
job,
job_queue_processes,
oracle
Friday, 21 October 2016
How to compile invalid objects with custom procedure?
CREATE OR REPLACE PROCEDURE P_COMPILE_INVALID_OBJECTS
AUTHID CURRENT_USER
IS
str_exec VARCHAR2 (200);
total_inv NUMBER;
err_code VARCHAR2 (200);
err_msg VARCHAR2 (200);
BEGIN
/*Author: Alper Özsöyler */
FOR cur_object IN ( SELECT owner, object_name, object_type
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner)
LOOP
str_exec := 'ALTER ';
IF cur_object.object_type = 'PACKAGE BODY'
THEN
str_exec := str_exec || 'PACKAGE ';
ELSE
str_exec := str_exec || cur_object.object_type || ' ';
END IF;
str_exec :=
str_exec
|| cur_object.owner
|| '.'
|| cur_object.object_name
|| ' COMPILE ';
IF cur_object.object_type = 'PACKAGE BODY'
THEN
str_exec := str_exec || 'BODY ';
END IF;
BEGIN
EXECUTE IMMEDIATE str_exec;
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line (
cur_object.owner
|| '.'
|| cur_object.object_name
|| ' ('
|| cur_object.object_type
|| ') '
|| '-> still invalid!');
err_code := SQLCODE;
err_msg := SUBSTR (SQLERRM, 1, 200);
INSERT INTO t_cmp_invalid_objects_log
VALUES (s_cpl_inv_objects_error_log.NEXTVAL,
cur_object.object_name,
cur_object.owner,
cur_object.object_type,
err_code || ' -ERROR- ' || err_msg);
COMMIT;
END;
END LOOP;
str_exec := 'select count(*) FROM dba_objects
WHERE status = ''INVALID''';
EXECUTE IMMEDIATE str_exec INTO total_inv;
DBMS_OUTPUT.put_line ('Total invalid number: ' || total_inv);
END;
/
DROP TABLE t_cmp_invalid_objects_log
/
CREATE TABLE t_cmp_invalid_objects_log
(
log_id INT PRIMARY KEY,
object_name VARCHAR2 (100),
object_schema VARCHAR2 (100),
object_type VARCHAR2 (100),
error VARCHAR2 (1000)
)
/
DROP SEQUENCE s_cpl_inv_objects_error_log
/
CREATE SEQUENCE s_cpl_inv_objects_error_log MINVALUE 0
START WITH 0
INCREMENT BY 1
CACHE 5;
Usage:
exec P_COMPILE_INVALID_OBJECTS();
Check error logs:
SQL> show error;
or
SQL> SELECT *
FROM ALL_ERRORS
WHERE OWNER = USER;
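As an alternative to the custom procedure, Oracle's own UTL_RECOMP package can recompile everything that is invalid (a sketch; usually run as SYSDBA):
SQL> exec UTL_RECOMP.RECOMP_SERIAL();
-- or in parallel: exec UTL_RECOMP.RECOMP_PARALLEL(4);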
Wednesday, 19 October 2016
How to drop invalid objects from database?
Identify the invalid objects
SQL>
SELECT *
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner;
Generate DROP statements for the invalid objects (review the output before running it)
SQL>
SELECT 'DROP ' || object_type || ' ' || owner || '.' || object_name || ';'
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner;
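The generated statements are easiest to review when spooled to a file first; a minimal sketch (the file path is arbitrary, and note that entries owned by PUBLIC need DROP PUBLIC SYNONYM rather than DROP SYNONYM):
SQL> set heading off feedback off
SQL> spool /tmp/drop_invalid.sql
SQL> SELECT 'DROP ' || object_type || ' ' || owner || '.' || object_name || ';'
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner;
SQL> spool off
-- review /tmp/drop_invalid.sql before running it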