Improve performance in the DELETE command

PL/SQL questions, tips and tricks. Topics related to packages, triggers, functions, Java stored procedures, etc. also go here.
  

Posted on Wed, 12 Jun 2013 4:02 pm

Guys, I'm deleting from a table with 12 million records using the following command: DELETE FROM FLXSTG.CFTB_CONTRACT_CHARGES WHERE STG_BLK_CD = 'EU' AND DT_INS_STG < TO_DATE('06/05/13', 'DD/MM/YY') -- 12,627,163 rows. Both columns are indexed. It turns out it takes a looooong time... hours and hours.

Does anyone have an idea of what I can do to improve performance?
cleberz

Posted on Thu, 13 Jun 2013 10:53 pm

Cleberz, as you mentioned, it is 12 million records you are trying to remove with a single DELETE command. I don't think the cleanup will be instant, especially if the database is in ARCHIVELOG mode.

Maybe you could use COLLECTIONS to try to speed up the cleanup of this table. COLLECTIONS work together with cursors and can speed up the processing of a massive volume of data. Below is a suggestion (a draft) of how you could build a routine with COLLECTIONS.

But you would need to do a little research on this topic in the ORACLE SQL or PL/SQL manuals.

The ORACLE PRESS books on PL/SQL programming (Portuguese editions) have complete chapters on this feature.

Code: Select all
DECLARE
  -- Cursor to identify the rowids (unique addresses) to be cleaned from the table
  -- A rowid is a unique address that locates the record in the database instance
  CURSOR CV_LIMPEZA IS
      SELECT ROWID FROM <TABELA> WHERE <CONDICOES>;
  -- Creating a kind of array here
  TYPE TP_ROWID IS TABLE OF ROWID;
  RY_ROWID    TP_ROWID;
BEGIN
  -- Open the cursor
  OPEN CV_LIMPEZA;
  -- Main cleanup loop
  LOOP
     -- Load 50000 records (rowid addresses) into the array
     FETCH  CV_LIMPEZA BULK COLLECT INTO RY_ROWID LIMIT 50000;
     -- FORALL statement for bulk deletion of the records
     FORALL I IN 1..RY_ROWID.COUNT
         DELETE FROM <TABELA> WHERE ROWID = RY_ROWID(I);
     COMMIT;
    -- If the end of the cursor has been reached, exit the LOOP
    EXIT WHEN CV_LIMPEZA%NOTFOUND;
  END LOOP;
  -- Close the cursor
  CLOSE CV_LIMPEZA;
END;
Hugs, Sergio Coutinho
stcoutinho
Location: Sao Paulo - SP

Posted on Fri, 14 Jun 2013 9:25 am

cleberz, a collection will not improve this problem. Pure SQL is always faster than the same SQL inside a PL/SQL block.

What I would do right away in your case is index the columns used in the WHERE clause by creating a composite index, or preferably run parallel DML.
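A minimal sketch of those two suggestions, using the table and columns from the first post (the index name and parallel degree here are only examples):

Code: Select all
-- Composite index on the two columns of the WHERE clause
CREATE INDEX FLXSTG.CFTB_CC_BLK_DT_IX
    ON FLXSTG.CFTB_CONTRACT_CHARGES (STG_BLK_CD, DT_INS_STG);

-- Or, preferably, run the DELETE as parallel DML
ALTER SESSION ENABLE PARALLEL DML;

DELETE /*+ PARALLEL(c, 4) */
  FROM FLXSTG.CFTB_CONTRACT_CHARGES c
 WHERE c.STG_BLK_CD = 'EU'
   AND c.DT_INS_STG < TO_DATE('06/05/13', 'DD/MM/YY');

COMMIT;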

[] s
fbifabio
Location: São Paulo - SP

Fábio Prado
www.fabioprado.net

Posted on Fri, 14 Jun 2013 3:09 pm

Create a table based on the records that you want to keep: CREATE TABLE myTable2 AS (SELECT * FROM myTable WHERE ID IN (what you want to keep)). Then wipe the entire table with TRUNCATE: TRUNCATE TABLE myTable;

After truncating the table, run an INSERT based on the records preserved in your backup table (myTable2): INSERT INTO myTable (preserved fields) SELECT (preserved fields) FROM myTable2. I'm a beginner in PL/SQL, but I believe this would be possible.
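A rough sketch of that idea, using the generic myTable names from the post (note that a TRUNCATE will fail if enabled foreign keys from other tables point at myTable):

Code: Select all
-- 1. Save the rows you want to keep in a backup table
CREATE TABLE myTable2 AS
  SELECT * FROM myTable
   WHERE id IN (/* whatever you want to keep */);

-- 2. Wipe the original table (fast, generates almost no UNDO/REDO)
TRUNCATE TABLE myTable;

-- 3. Put the preserved rows back
INSERT INTO myTable
  SELECT * FROM myTable2;
COMMIT;

-- 4. Drop the backup table once everything has been checked
DROP TABLE myTable2;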

If that doesn't work, try deleting in smaller chunks.
ogj

Posted on Wed, 19 Jun 2013 10:26 am

Thanks guys, I partitioned the table and I'm now purging per partition with the following command: ALTER TABLE <tabela> TRUNCATE SUBPARTITION <subparticao>;
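For reference, applied to the table from the first post it would look something like this (the subpartition name here is purely hypothetical, since the partitioning scheme was not posted):

Code: Select all
-- Hypothetical subpartition name; adjust to your own partitioning scheme
ALTER TABLE FLXSTG.CFTB_CONTRACT_CHARGES
  TRUNCATE SUBPARTITION SP_EU_201305
  UPDATE GLOBAL INDEXES;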


Thank you!
cleberz

Posted on Mon, 29 Jul 2013 7:09 pm

Gentlemen, I know this case is already solved. cleberz used TRUNCATE SUBPARTITION.
But let's analyze all the options: Fábio suggested above using a DELETE command directly.
We know that the GURU Tom Kyte has already said that it is always better to use a direct SQL command than PL/SQL (that is, theoretically a DELETE is always better than using BULK).
Buuut... in this case there are 12 million rows. I think that, depending on the size of the columns, a single DELETE command might not manage to delete it all (it can blow up UNDO, etc.).

We don't know how many rows the table has altogether.
So in my opinion, I would either use the FORALL that Sergio suggested, or use the TRUNCATE.

1. If virtually all rows need to be erased, then the TRUNCATE idea is good (save the rows that you want to KEEP in another table, TRUNCATE the original, and then insert the rows back).

2. But if the table has 200 million rows (and only those 12 million have to go), then I think the FORALL/BULK approach is better.

As we can see, each case is different. In tuning there is never a ready, quick answer. As PAUL said, "In tuning, what matters is what is fastest. It doesn't matter how it is done." Here the way forward is to try several things and measure the time. :-o
dr_gori
Location: Seattle, WA, USA

Thomas F. G

Posted on Fri, 01 Dec 2017 3:10 pm

stcoutinho wrote: Cleberz, as you mentioned, it is 12 million records you are trying to remove with a single DELETE command. I don't think the cleanup will be instant, especially if the database is in ARCHIVELOG mode.

Maybe you could use COLLECTIONS to try to speed up the cleanup of this table. COLLECTIONS work together with cursors and can speed up the processing of a massive volume of data. Below is a suggestion (a draft) of how you could build a routine with COLLECTIONS.

Code: Select all
DECLARE
  -- Cursor to identify the rowids (unique addresses) to be cleaned from the table
  -- A rowid is a unique address that locates the record in the database instance
  CURSOR CV_LIMPEZA IS
      SELECT ROWID FROM <TABELA> WHERE <CONDICOES>;
  -- Creating a kind of array here
  TYPE TP_ROWID IS TABLE OF ROWID;
  RY_ROWID    TP_ROWID;
BEGIN
  -- Open the cursor
  OPEN CV_LIMPEZA;
  -- Main cleanup loop
  LOOP
     -- Load 50000 records (rowid addresses) into the array
     FETCH  CV_LIMPEZA BULK COLLECT INTO RY_ROWID LIMIT 50000;
     -- FORALL statement for bulk deletion of the records
     FORALL I IN 1..RY_ROWID.COUNT
         DELETE FROM <TABELA> WHERE ROWID = RY_ROWID(I);
     COMMIT;
    -- If the end of the cursor has been reached, exit the LOOP
    EXIT WHEN CV_LIMPEZA%NOTFOUND;
  END LOOP;
  -- Close the cursor
  CLOSE CV_LIMPEZA;
END;
Hugs, Sergio Coutinho
Hello Sergio, I had a similar problem, although my issue was not performance but rather blowing up the TEMP/UNDO tablespaces at customers where I had no DBA rights. I used this script as the basis for purging records, but I noticed that it always left behind records that met the condition of the CV_LIMPEZA cursor.

Even in recent versions of Oracle (10g, 11g), CV_LIMPEZA%NOTFOUND evaluates to TRUE on a BULK COLLECT fetch whenever the number of rows returned is less than the LIMIT used, so the solution is to replace it with EXIT WHEN RY_ROWID.COUNT = 0.
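In other words, Sergio's draft would end up roughly like this with that change (same placeholders as above):

Code: Select all
DECLARE
  CURSOR CV_LIMPEZA IS
      SELECT ROWID FROM <TABELA> WHERE <CONDICOES>;
  TYPE TP_ROWID IS TABLE OF ROWID;
  RY_ROWID    TP_ROWID;
BEGIN
  OPEN CV_LIMPEZA;
  LOOP
     FETCH CV_LIMPEZA BULK COLLECT INTO RY_ROWID LIMIT 50000;
     -- Exit on the collection size rather than on CV_LIMPEZA%NOTFOUND
     EXIT WHEN RY_ROWID.COUNT = 0;
     FORALL I IN 1..RY_ROWID.COUNT
         DELETE FROM <TABELA> WHERE ROWID = RY_ROWID(I);
     COMMIT;
  END LOOP;
  CLOSE CV_LIMPEZA;
END;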

Jayme Jeffman
jjeffman

