Help: query optimization

Database Tuning, SQL Tuning, Tuning Tools
  

Posted on Tue, 05 May 2015 5:27 pm

Good afternoon.
I would like some help, if possible, with tuning a simple query that has no filters.

The query is this:
Code:
select * from sms_safx42_spo;
The problem with this query is that it returns around 10 million records, and I am writing its result to a file via utl_file.

Here is the execution plan:
Code:
SELECT STATEMENT REMOTE, GOAL = ALL_ROWS   Id=0   Operation=SELECT STATEMENT   Cost=8105   CPU cost=3727389226   Bytes=1682697087
TABLE ACCESS FULL   Id=1   Operation=TABLE ACCESS   Cost=8105   CPU cost=3727389226   Bytes=1682697087
Is there any way to improve it?

Note: this table is accessed via a db_link.
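For context, the export loop described above might look like this minimal sketch; the directory object DUMP_DIR and the column col1 are assumptions for illustration, not names from the original post:

```sql
DECLARE
  vFile UTL_FILE.FILE_TYPE;
BEGIN
  -- Open the output file (DUMP_DIR is a hypothetical directory object);
  -- 32767 is the maximum line buffer size.
  vFile := UTL_FILE.FOPEN('DUMP_DIR', 'sms_safx42_spo.txt', 'w', 32767);
  --
  -- Row-by-row fetch over the db_link, writing each row as it arrives.
  FOR r IN (SELECT * FROM sms_safx42_spo) LOOP
    UTL_FILE.PUT_LINE(vFile, r.col1);  -- col1 is a placeholder column
  END LOOP;
  --
  UTL_FILE.FCLOSE(vFile);
END;
```

With 10 million rows, this single-row loop spends most of its time on fetch round trips and per-line writes, which is where the suggestions below come in.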
marcus.asse

Posted on Mon, 25 May 2015 3:23 pm

Good afternoon. My tip is to use a hint. The hint I'll suggest splits the processing across more than one processing core. In the example below I am using 2 cores.

Code:
select /*+ parallel(2) */ * from sms_safx42_spo;
Please give us feedback.
Michel Pessoa

Posted on Fri, 24 Jul 2015 8:33 am

It is possible that the biggest bottleneck is writing the file, not the query itself.
DanielNN
Location: Fortaleza - CE

Regards,

Daniel N.N.

Posted on Mon, 27 Jul 2015 3:24 pm

Hey brother, all good?


I don't know if you've solved your problem (if so, post the solution here for us), but it's worth taking a closer look at the db_link side of this.

A tip would be to generate this txt file by running the script directly on the remote database (the one that actually hosts the table), and then transfer the txt afterwards. Would that work for you?

If you can't, could you bring those records over, X rows at a time, into a local table, and do the file export from that table?
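The staging-table idea above could be sketched like this; the link name dblink_spo, the staging table stg_safx42_spo, and the key column some_key are all hypothetical names, not from the original thread:

```sql
-- One-shot copy: pull all remote rows into a local staging table.
CREATE TABLE stg_safx42_spo AS
  SELECT * FROM sms_safx42_spo@dblink_spo;

-- Or, to bring the rows over X at a time, repeat an INSERT with a
-- range filter on some ordering column, committing between batches:
INSERT INTO stg_safx42_spo
  SELECT *
  FROM   sms_safx42_spo@dblink_spo
  WHERE  some_key BETWEEN :lo AND :hi;
COMMIT;
```

The file export then reads only from the local stg_safx42_spo, so every fetch is local instead of going across the db_link.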

Hug, cheezburger network
Trevisolli
Location: Araraquara - SP

Hugs,

Trevisolli
OCA Oracle PL/SQL Developer Certified Associate
OCP Oracle Forms Developer Certified Professional
Araraquara-SP

Posted on Mon, 19 Oct 2015 12:31 pm

Maybe this problem has already been solved, but I'll leave my contribution. Since cheezburger network commented in the post above about bringing the records over X rows at a time, I thought of using BULK COLLECT.
That way you can return multiple records in a single fetch. There's a catch: this feature uses more memory, so it is important to check how many rows (X) each fetch should bring.

Here is some sample code I wrote:
Code:
PROCEDURE TESTE AS
    --
    CURSOR curOrders is
      select order_id,
             order_date,
             order_mode,
             customer_id,
             order_status,
             order_total,
             sales_rep_id,
             promotion_id,
             warehouse_id,
             delivery_type,
             cost_of_delivery,
             wait_till_all_available,
             delivery_address_id,
             customer_class,
             card_id,
             invoice_address_id
      from   orders;
    --
    TYPE tblOrders is table of curOrders%ROWTYPE INDEX BY PLS_INTEGER;
    clOrders       tblOrders;
    --
    vLimit PLS_INTEGER := 4000;
    --
  BEGIN
    --
    open curOrders;
      --
      loop
        --
        fetch curOrders
        bulk  collect into clOrders
        limit vLimit;
        --
        dbms_output.put_line(to_char(clOrders.count));
        --
        exit when clOrders.count < vLimit;
        --
      end loop;
      --
    close curOrders;
    --
  END TESTE;
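To tie this back to the original question, the same batched fetch could feed UTL_FILE directly, writing each batch as it arrives. This is only a sketch: DUMP_DIR, the file name, and the two columns written out are assumptions for illustration.

```sql
DECLARE
  CURSOR curOrders IS SELECT * FROM orders;
  --
  TYPE tblOrders IS TABLE OF curOrders%ROWTYPE INDEX BY PLS_INTEGER;
  clOrders tblOrders;
  --
  vLimit PLS_INTEGER := 4000;           -- rows per fetch (the "X")
  vFile  UTL_FILE.FILE_TYPE;
BEGIN
  vFile := UTL_FILE.FOPEN('DUMP_DIR', 'orders.txt', 'w', 32767);
  --
  OPEN curOrders;
  LOOP
    -- Bring vLimit rows per round trip instead of one at a time.
    FETCH curOrders BULK COLLECT INTO clOrders LIMIT vLimit;
    --
    -- Write out the batch we just fetched.
    FOR i IN 1 .. clOrders.COUNT LOOP
      UTL_FILE.PUT_LINE(vFile,
        clOrders(i).order_id || ';' || clOrders(i).order_total);
    END LOOP;
    --
    EXIT WHEN clOrders.COUNT < vLimit;  -- last (partial) batch was written
  END LOOP;
  CLOSE curOrders;
  --
  UTL_FILE.FCLOSE(vFile);
END;
```

Raising vLimit reduces round trips over the db_link at the cost of more PGA memory per batch, so it is worth testing a few sizes.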
souldeath
Location: Campinas


