Oracle 12c - Data Pump Export with PL/SQL - Starting the Import/Export from within the Database

DB: 11g / 12c / 18c

Task: A schema is to be regularly "copied"/"synchronized" between two databases.

Solution:

PL/SQL code drives Data Pump directly from within the database, so the user needs no access to the operating system.

The copy itself runs over a database link.

The complete source code in its latest version can be found here ⇒ https://github.com/gpipperr/OraPowerShell/tree/master/Ora_SQLPlus_SQLcL_sql_scripts/datapump


Data Pump Import over the Network

Using a DB link, the target system queries the source system and inserts the data locally:

 |--------|                    |--------|
 | GPITST |    <==== DB Link   | GPIPRD |
 |--------|                    |--------|
  source                    destination  

The export runs from the "source database" into the "destination database", i.e. the DB link is created in the "destination database" and points to the "source database"!

For more on Data Pump see also ⇒ Oracle Data Pump Schema Export and Import


Implementation

Creating Test Schemas

Source DB GPITST

CREATE USER BESTDBA IDENTIFIED BY "xxxxxxxx";
GRANT CONNECT, resource TO BESTDBA;
 
GRANT DATAPUMP_EXP_FULL_DATABASE  TO BESTDBA;
 
CONNECT BESTDBA/"xxxxxxxx"
 
-- create test data
 
SQL> CREATE TABLE t_all_objects AS SELECT * FROM all_objects;
 
Table created.
 
SQL> SELECT COUNT(*) FROM t_all_objects;
 
  COUNT(*)
----------
     86820

Target/Destination DB GPIPROD

CREATE USER BESTDBA IDENTIFIED BY "xxxxxxx";
GRANT CONNECT, resource TO BESTDBA;
 
GRANT DATAPUMP_IMP_FULL_DATABASE  TO BESTDBA;
GRANT CREATE TABLE, CREATE PROCEDURE TO bestdba;
 
 
CREATE directory BACKUP AS '/opt/oracle/acfs/import';
 
GRANT READ,WRITE ON directory BACKUP TO BESTDBA;
 
CONNECT BESTDBA/"xxxxxxxx"
 
CREATE DATABASE LINK DP_TRANSFER CONNECT TO BESTDBA IDENTIFIED BY "xxxxxxx" USING 'GPITSTDB';
 
 
SQL> SELECT global_name FROM global_name@DP_TRANSFER;
 
GLOBAL_NAME
--------------------------------------------------------------------------------
GPITST

Or adapt an existing user as follows!


Granting User Privileges

Grant the roles for exporting/importing the database:

  • DATAPUMP_EXP_FULL_DATABASE
  • DATAPUMP_IMP_FULL_DATABASE

On the source database:

GRANT DATAPUMP_EXP_FULL_DATABASE  TO BESTDBA;

On the target database:

GRANT DATAPUMP_IMP_FULL_DATABASE  TO BESTDBA;
GRANT CREATE TABLE, CREATE PROCEDURE TO bestdba;

Remember: the export runs from the "source database" into the "destination database", i.e. the DB link is created in the "destination database" and points to the "source database"!

Create it following this pattern, as the user who will later run the export:

CREATE DATABASE LINK mylink CONNECT TO remote_user IDENTIFIED BY remote_pwd USING 'remote_db';

If the user's password is not known, create the DB link via a helper function: creating a DB link in another schema.
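
A minimal sketch of the usual workaround, assuming a DBA account with the CREATE ANY PROCEDURE and EXECUTE ANY PROCEDURE privileges (the helper name tmp_create_db_link is only a placeholder): a temporary definer's-rights procedure is created in the target schema, executed once so that the link is created there, and then dropped again.

BEGIN
  -- temporary definer's rights procedure owned by the target schema
  EXECUTE IMMEDIATE q'[CREATE OR REPLACE PROCEDURE bestdba.tmp_create_db_link
  IS
  BEGIN
    EXECUTE IMMEDIATE 'CREATE DATABASE LINK dp_transfer
                       CONNECT TO bestdba IDENTIFIED BY "xxxxxxxx"
                       USING ''GPITSTDB''';
  END;]';
  -- run it once - the link is created in the schema of the procedure owner
  EXECUTE IMMEDIATE 'BEGIN bestdba.tmp_create_db_link; END;';
  -- clean up
  EXECUTE IMMEDIATE 'DROP PROCEDURE bestdba.tmp_create_db_link';
END;
/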

In an Oracle cluster environment, make sure the TNS alias is also present in the tnsnames.ora of the cluster (i.e. of the "grid" user)!
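
For illustration, the corresponding tnsnames.ora entry for the alias used above might look like this (host name, port and service name are assumptions and must match your environment):

GPITSTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = gpitst-host.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = GPITST)
    )
  )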


Creating a DB Directory for the Log

So that we can read the log later, we load it into the database as an external table.

Create the directory (in a cluster, make sure it can also be written by the grid user and is reachable from all nodes of the cluster!):

CREATE directory import AS '/opt/oracle/acfs/import';
 
GRANT READ,WRITE ON directory import TO bestdba;

Grant the user privileges on the directory!
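
Afterwards verify the directory name and path, for example with:

SELECT directory_name, directory_path
  FROM all_directories
 WHERE directory_name IN ('BACKUP','IMPORT');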


PL/SQL Code to Start Data Pump - Importing the Complete Schema

With this code a whole schema can then be synchronized:

CREATE OR REPLACE PROCEDURE dp_import_user_schema
IS
  v_dp_handle   NUMBER;
  PRAGMA AUTONOMOUS_TRANSACTION;
 
  v_db_directory varchar2(200):='BACKUP';
  v_db_link varchar2(200):='DP_TRANSFER';
  v_job_name varchar2(256):=USER ||'_IMPORT' || TO_CHAR (SYSDATE, 'DD_HH24');
  --v_log_file_name varchar2(256):=user||'_' || TO_CHAR (SYSDATE, 'YYYYMMDD-HH24MISS') || '.log';
  v_log_file_name varchar2(256):='db_import_plsql.log';
 
BEGIN
 
  dbms_output.put_line(' -- Import Parameter ------------' );
  dbms_output.put_line(' -- DB Link      :: '|| v_db_link  );
  dbms_output.put_line(' -- DB DIRECTORY :: '|| v_db_directory);
  dbms_output.put_line(' -- DP JOB Name  :: '|| v_job_name);
  dbms_output.put_line(' -- DP Log File  :: '|| v_log_file_name);
 
 
 
  -- Create Data Pump Handle - "IMPORT" in this case
  v_dp_handle := DBMS_DATAPUMP.open  (operation => 'IMPORT'
	                  , job_mode    => 'SCHEMA'
					  , job_name    => v_job_name
					  , remote_link => v_db_link);
 
  -- No PARALLEL
  DBMS_DATAPUMP.set_parallel (handle => v_dp_handle, degree => 1);
 
  -- consistent EXPORT
  -- Consistent to the start of the export with the timestamp of systimestamp
  --
  DBMS_DATAPUMP.SET_PARAMETER(
    handle       =>  v_dp_handle
   , name         => 'FLASHBACK_TIME'
   , VALUE        => 'systimestamp'
   );
 
 
  -- import the complete schema (schema filter)
  DBMS_DATAPUMP.metadata_filter (handle => v_dp_handle
                                , name => 'SCHEMA_EXPR'
								 , VALUE => 'IN ('''||USER||''')');
 
 
  -- Logfile
  DBMS_DATAPUMP.add_file (handle      => v_dp_handle
                         ,filename    => v_log_file_name
                         ,filetype    => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
                         ,directory   => v_db_directory
                         ,reusefile   => 1   -- overwrite existing files
                         ,filesize    => '10000M');
 
 
  -- Do it!
  DBMS_DATAPUMP.start_job (handle => v_dp_handle);
 
  COMMIT;
 
  DBMS_DATAPUMP.detach (handle => v_dp_handle);
 
END dp_import_user_schema;
/
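
A quick test call; for the regular synchronisation the procedure could also be scheduled directly in the database (the job name SYNC_SCHEMA_JOB and the nightly schedule below are only an example):

SET SERVEROUTPUT ON
EXEC dp_import_user_schema
 
-- run the schema import every night at 02:00
BEGIN
  DBMS_SCHEDULER.create_job (
      job_name        => 'SYNC_SCHEMA_JOB'
    , job_type        => 'STORED_PROCEDURE'
    , job_action      => 'DP_IMPORT_USER_SCHEMA'
    , start_date      => SYSTIMESTAMP
    , repeat_interval => 'FREQ=DAILY;BYHOUR=2'
    , enabled         => TRUE);
END;
/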

Importing Only a Single Table

Code to import a single table of the schema:

CREATE OR REPLACE PROCEDURE dp_import_table(p_tablename varchar2
                                          , p_mode varchar2)
 
--- +----------------------------------
--
-- testcall exec dp_import_table(p_tablename => 'T_ALL_OBJECTS', p_mode=> 'REPLACE')
--
-- +----------------------------------										  
 
IS
  v_dp_handle   NUMBER;
  PRAGMA AUTONOMOUS_TRANSACTION;
 
  v_db_directory varchar2(200):='BACKUP';
  v_db_link varchar2(200):='DP_TRANSFER';
  v_job_name varchar2(256):=USER ||'_IMPORT' || TO_CHAR (SYSDATE, 'DD_HH24');
  --v_log_file_name varchar2(256):=user||'_' || TO_CHAR (SYSDATE, 'YYYYMMDD-HH24MISS') || '.log';
  -- use same name to import the data later via external table
  v_log_file_name varchar2(256):='db_import_plsql.log';
 
BEGIN
 
  dbms_output.put_line(' -- Import Parameter ------------' );
 
 
  dbms_output.put_line(' -- Tablename      :: '|| p_tablename  );
  dbms_output.put_line(' -- Replace Modus  :: '|| p_mode);
 
 
  dbms_output.put_line(' -- DB Link       :: '|| v_db_link  );
  dbms_output.put_line(' -- DB DIRECTORY  :: '|| v_db_directory);
  dbms_output.put_line(' -- DP JOB Name   :: '|| v_job_name);
  dbms_output.put_line(' -- DP Log File   :: '|| v_log_file_name);
 
 
  IF UPPER(p_mode) NOT IN ('TRUNCATE', 'REPLACE', 'APPEND', 'SKIP') THEN
      RAISE_APPLICATION_ERROR (-20000, '-- Error :: This Tablemode is not supported ::'||p_mode);
  END IF;
 
 
  -- Create Data Pump Handle - "IMPORT" in this case
  v_dp_handle := DBMS_DATAPUMP.open  (operation => 'IMPORT'
	                  , job_mode    => 'TABLE'
					  , job_name    => v_job_name
					  , remote_link => v_db_link);
 
  -- No PARALLEL
  DBMS_DATAPUMP.set_parallel (handle => v_dp_handle, degree => 1);
 
  -- consistent EXPORT
  -- Consistent to the start of the export with the timestamp of systimestamp
  --
  DBMS_DATAPUMP.SET_PARAMETER(
    handle       =>  v_dp_handle
   , name         => 'FLASHBACK_TIME'
   , VALUE        => 'systimestamp'
   );
 
  -- TABLE_EXISTS_ACTION -- : TRUNCATE, REPLACE, APPEND, and SKIP.
  DBMS_DATAPUMP.SET_PARAMETER(
     handle       =>  v_dp_handle
   , name         => 'TABLE_EXISTS_ACTION'
   , VALUE        => UPPER(p_mode)
  );
 
 
  --import only this table
 
   DBMS_DATAPUMP.metadata_filter ( handle =>  v_dp_handle
                                , name  =>  'NAME_EXPR'
								, VALUE =>  'IN ('''||p_tablename||''')');
 
 
  -- import from this schema
 
  DBMS_DATAPUMP.metadata_filter (handle => v_dp_handle
                                , name  => 'SCHEMA_EXPR'
								, VALUE => 'IN ('''||USER||''')');
 
 
  -- Logfile
  DBMS_DATAPUMP.add_file (handle      => v_dp_handle
                         ,filename    => v_log_file_name
                         ,filetype    => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
                         ,directory   => v_db_directory
                         ,reusefile   => 1     -- overwrite existing files
                         ,filesize    => '10000M');
 
   -- Do it!
  DBMS_DATAPUMP.start_job (handle => v_dp_handle);
 
  COMMIT;
 
  DBMS_DATAPUMP.detach (handle => v_dp_handle);
 
END dp_import_table;
/
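
Test call as noted in the header comment, for example to refresh the test table created above:

SET SERVEROUTPUT ON
EXEC dp_import_table(p_tablename => 'T_ALL_OBJECTS', p_mode => 'REPLACE')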

Problems during Development

Problem ORA-31626: job does not exist

On the first call I get the error "ORA-31626: job does not exist".

Error:

ERROR at line 1:
 
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 1137
ORA-06512: at "SYS.DBMS_DATAPUMP", line 5285

Privileges?

Solution - on the target DB:

 GRANT CREATE TABLE, CREATE PROCEDURE TO bestdba;

The user also needs these grants directly; a role is not sufficient, since this is PL/SQL running in the background!

see also https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9532917900346934390
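
To check which privileges the user holds directly (and not only via a role), a query like this helps:

SELECT privilege
  FROM dba_sys_privs
 WHERE grantee = 'BESTDBA';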

Problem ORA-39001: invalid argument value

Error:

ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3507
ORA-06512: at "SYS.DBMS_DATAPUMP", line 5296

As a first step, check the DB link again; in this case the name of the DB link was set incorrectly in the script 8-O.

See also ⇒ Error ORA-39001 When Using DBMS_DATAPUMP API Over A Network Link (Doc ID 1160207.1)

Problem DBMS_DATAPUMP.ATTACH ORA-31626: job does not exist

A job can no longer be deleted and is stuck in the state "DEFINING" - what now?

End the existing session and log on again!

If anything fails, the handle is blocked! Simply log off and on again, and everything works again!
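
Whether a job is still hanging around, and in which state, can be checked in the Data Pump job view:

SELECT job_name, state, attached_sessions
  FROM user_datapump_jobs;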


Reading the Data Pump Log File inside the DB

An external table is used to read the log file.

DROP TABLE DP_DUMP_LOG;
 
CREATE TABLE DP_DUMP_LOG (
  log_line      VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY backup
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (
      log_line      CHAR(4000)
    )
  )
  LOCATION ('db_import_plsql.log')
)
PARALLEL 1
REJECT LIMIT UNLIMITED;

Query it with:

SELECT * FROM DP_DUMP_LOG;
 
 
Starting "BESTDBA"."BESTDBA_IMPORT16_09":
Estimate IN progress USING BLOCKS method...
Processing object TYPE TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation USING BLOCKS method: 10 MB
Processing object TYPE TABLE_EXPORT/TABLE/TABLE
. . imported "BESTDBA"."T_ALL_OBJECTS"                    86820 ROWS
Processing object TYPE TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "BESTDBA"."BESTDBA_IMPORT16_09" successfully completed at Sat Feb 16 09:24:44 2019 elapsed 0 00:00:03

Cancelling the Job inside the DB

CREATE OR REPLACE PROCEDURE dp_import_stop_job(p_job_name VARCHAR2)
IS					  
--- +----------------------------------
--
-- testcall exec dp_import_stop_job(p_job_name => 'MY_JOB')
--
-- +----------------------------------										  
  v_dp_handle   NUMBER;
 
  CURSOR c_act_jobs IS 
 
  SELECT job_name
		 ,  operation
		 ,  job_mode
		 ,  state
		 ,  attached_sessions
	FROM user_datapump_jobs
	WHERE job_name NOT LIKE 'BIN$%'
		ORDER BY 1,2
  ;
 
 
  v_job_exits BOOLEAN:=FALSE;
  v_job_mode VARCHAR2(32);
  v_job_state VARCHAR2(32);
  v_real_job_name VARCHAR2(32);
  v_count PLS_INTEGER;
 
  v_sts ku$_Status;
  v_job_run_state VARCHAR2(2000);
 
BEGIN
 
  DBMS_OUTPUT.put_line(' -- Stop the Job  Parameter ------------' );
  DBMS_OUTPUT.put_line(' -- p_job_name      :: '|| p_job_name  );
 
  -- query all actual jobs
  -- to show a list of candidates if job_name is wrong
  --
  FOR rec IN  c_act_jobs 
  LOOP  
	IF rec.job_name = UPPER(p_job_name) THEN
		-- do not reset the flag for non-matching jobs,
		-- otherwise a later job in the cursor would overwrite the match
		v_job_exits:=TRUE;
		v_real_job_name:=rec.job_name;
		v_job_mode:=rec.job_mode;
		v_job_state:=rec.state;
	END IF;
    DBMS_OUTPUT.put_line('--- Found this Job :: ' ||rec.job_name );
    DBMS_OUTPUT.put_line('+-- Operation      :: ' ||rec.operation );
	DBMS_OUTPUT.put_line('+-- Mode           :: ' ||rec.job_mode );
	DBMS_OUTPUT.put_line('+-- State          :: ' ||rec.state );
	DBMS_OUTPUT.put_line('+-- Sessions       :: ' ||rec.attached_sessions );
  END LOOP;  
 
   IF v_job_exits THEN 
 
 
	 BEGIN
	 		-- Create Data Pump Handle - "ATTACH" in this case
			v_dp_handle := DBMS_DATAPUMP.ATTACH(
					 job_name    => v_real_job_name
					,job_owner   => USER); 
 
	  EXCEPTION 
		WHEN DBMS_DATAPUMP.NO_SUCH_JOB THEN
		  -- check if the old job master table still exists
		  SELECT COUNT(*) INTO v_count FROM user_tables WHERE UPPER(table_name) = UPPER(v_real_job_name);
		  IF v_count > 0 THEN
			EXECUTE IMMEDIATE 'drop table '||USER||'."'||v_real_job_name||'"';
		  END IF;
 
		  RAISE_APPLICATION_ERROR (-20003, '-- Error :: Job Not running anymore, check for other errors - no mastertable  for '||p_job_name || ' get Error '||SQLERRM);
 
	    WHEN OTHERS THEN
	       RAISE_APPLICATION_ERROR (-20002, '-- Error :: Not possible to attach to the job - Error :: '||SQLERRM);
	  END;
 
 
		IF  v_job_state IN ('DEFINING') THEN
 
			-- check if the job is in the defining state!
			-- abnormal situation, normal stop not possible
			-- use DBMS_DATAPUMP.START_JOB to restart the job
 
			DBMS_DATAPUMP.START_JOB  ( 	handle    => v_dp_handle );
 
 
		END IF;	  
 
		-- print the status
 
		 dbms_datapump.get_status (handle => v_dp_handle
		                    , mask       => dbms_datapump.KU$_STATUS_WIP
                            , timeout    => 0
                            , job_state  => v_job_run_state
                            , status     => v_sts
							);
 
		DBMS_OUTPUT.put_line('+-- Akt State       :: ' ||v_job_run_state );		
 
 
		-- Stop the job
		DBMS_DATAPUMP.STOP_JOB (
			handle    => v_dp_handle
			, IMMEDIATE   => 1 			-- stop now
			, keep_master => NULL  		-- delete Master table
			, delay       => 5          -- wait 5 seconds before kill for other sessions
		);  		
 
	ELSE
      RAISE_APPLICATION_ERROR (-20000, '-- Error :: This job name not found::'||p_job_name);
	END IF;	  
 
END dp_import_stop_job;
 
/ 
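
Example call with the job name from the log output above:

SET SERVEROUTPUT ON
EXEC dp_import_stop_job(p_job_name => 'BESTDBA_IMPORT16_09')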
 
 

Monitoring Jobs in the DB

see also https://github.com/gpipperr/OraPowerShell/blob/master/Ora_SQLPlus_SQLcL_sql_scripts/datapump.sql

--==============================================================================
-- GPI - Gunther Pippèrr
-- Desc:   Get Information about running data pump jobs
-- Date:   November 2013
--==============================================================================
SET linesize 130 pagesize 300 
 
COLUMN owner_name format a10;
COLUMN job_name   format a20
COLUMN state      format a12
 
COLUMN operation LIKE state
COLUMN job_mode LIKE state
 
ttitle  "Datapump Jobs"  SKIP 2
 
 
SELECT owner_name
    ,  job_name
	 ,  operation
	 ,  job_mode
	 ,  state
	 ,	 attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2
/
 
ttitle  "Datapump Master Table"  SKIP 2
 
 
COLUMN STATUS       format a10;
COLUMN object_id    format 99999999
COLUMN object_type  format a12
COLUMN OBJECT_NAME  format a25
 
SELECT o.status 
     , o.object_id
	 , o.object_type
	 , o.owner||'.'||object_name AS OBJECT_NAME
FROM dba_objects o
   , dba_datapump_jobs j
WHERE o.owner=j.owner_name 
  AND o.object_name=j.job_name
  AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2
/  
 
ttitle off
 
prompt ... 
prompt ... CHECK FOR "NOT RUNNING" Jobs
prompt ... 

Sources

Code example for a "normal" Data Pump export call

-- Export GPIDB Database
SET SERVEROUTPUT ON
 
ACCEPT export_dir  CHAR PROMPT 'Enter the name for the directory for the export of the database:'
 
DECLARE
  CURSOR c_dir (P_DIRNAME VARCHAR2)
  IS
    SELECT DIRECTORY_PATH
      FROM dba_directories
     WHERE DIRECTORY_NAME = P_DIRNAME;
 
  v_dir   dba_directories.DIRECTORY_PATH%TYPE;
BEGIN
  DBMS_OUTPUT.put_line ('check for directory GPIDB_EXPORT');
 
  OPEN c_dir ('GPIDB_EXPORT');
 
  FETCH c_dir INTO v_dir;
 
  IF SQL%NOTFOUND
  THEN
    DBMS_OUTPUT.put_line ('create directory GPIDB_EXPORT');
 
    EXECUTE IMMEDIATE 'create directory GPIDB_export as ''&&export_dir''';
  ELSE
    IF v_dir NOT LIKE '&&export_dir'
    THEN
      DBMS_OUTPUT.put_line ('relink directory GPIDB_EXPORT');
 
      EXECUTE IMMEDIATE 'drop directory GPIDB_export';
 
      EXECUTE IMMEDIATE 'create directory GPIDB_export as ''&&export_dir''';
    END IF;
  END IF;
 
  CLOSE c_dir;
END;
/
 
SELECT DIRECTORY_PATH
  FROM dba_directories
 WHERE DIRECTORY_NAME = 'GPIDB_EXPORT';
 
--- Start datapump to export the database
 
CREATE OR REPLACE PROCEDURE db_export_GPIDB
IS
  v_dp_handle   NUMBER;
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  -- Create Data Pump Handle - "TABLE EXPORT" in this case
  v_dp_handle :=
    DBMS_DATAPUMP.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'GPIDB_EXPORT2' || TO_CHAR (SYSDATE, 'DD_HH24'));
 
  DBMS_DATAPUMP.set_parallel (handle => v_dp_handle, degree => 4);
 
  -- Export the complete schema
  DBMS_DATAPUMP.metadata_filter (handle => v_dp_handle, name => 'SCHEMA_EXPR', VALUE => 'IN (''GPIDB'')');
 
  -- Specify target file - make it unique with a timestamp
  DBMS_DATAPUMP.add_file (handle      => v_dp_handle
                         ,filename    => 'GPIDB_' || TO_CHAR (SYSDATE, 'YYYYMMDD-HH24MISS') || '%U.dmp'
                         ,directory   => 'GPIDB_EXPORT'
                         ,reusefile   => 1                                                             -- overwrite existing files
                         ,filesize    => '50000M');
 
  -- Logfile
  DBMS_DATAPUMP.add_file (handle      => v_dp_handle
                         ,filename    => 'GPIDB_' || TO_CHAR (SYSDATE, 'YYYYMMDD-HH24MISS') || '.log'
                         ,filetype    => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
                         ,directory   => 'GPIDB_EXPORT'
                         ,reusefile   => 1                                                             -- overwrite existing files
                         ,filesize    => '10000M');
 
  --    MERGE => that each partitioned table is re-created in the target database as an unpartitioned table
  -- DBMS_DATAPUMP.set_parameter (handle => v_dp_handle, name => 'PARTITION_OPTIONS', VALUE => 'MERGE');
 
  -- Do it!
  DBMS_DATAPUMP.start_job (handle => v_dp_handle);
 
  COMMIT;
--    DBMS_DATAPUMP.detach (handle => v_dp_handle);
END;
/
 
BEGIN
  DBMS_OUTPUT.put_line (' create export at ' || TO_CHAR (SYSDATE, 'dd.mm HH24:MI'));
  db_export_GPIDB;
END;
/
 
PROMPT "to attach to the job please use:"
 
SELECT 'expdp "''/ as sysdba''" attach=GPIDB_EXPORT2' || TO_CHAR (SYSDATE, 'DD_HH24') FROM DUAL;
 
TTITLE OFF