Table with 300 million records

Database Tuning, SQL Tuning, Tuning Tools
dejambo
Rank: Senior Programmer
Posts: 68
Joined: Wed, 27 Jun 2012 8:58 am

Good afternoon, people.

I have a table with more than 300 million records, and there is a query that uses this table to generate a report. The problem is that this query is taking too long and consuming a lot of database resources. I've tried some tuning alternatives on this query, but I could not improve it. I'm out of options; would you have any tips for this case? Would partitioning this table be a solution?

Thanks.
nor.dbajunior
Rank: Senior Intern
Posts: 9
Joined: Thu, 02 Dec 2010 10:32 pm
Location: cuiaba

One solution when the table is large is to partition it.
If there is a date field, for example, you can partition by year; a sketch follows below.
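
A minimal sketch of what that could look like in Oracle, assuming a hypothetical table named sales with a sale_date column (all names here are invented for illustration, not from the original post):

CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  -- one partition per year of data
  PARTITION p2010 VALUES LESS THAN (TO_DATE('2011-01-01', 'YYYY-MM-DD')),
  PARTITION p2011 VALUES LESS THAN (TO_DATE('2012-01-01', 'YYYY-MM-DD')),
  PARTITION p2012 VALUES LESS THAN (TO_DATE('2013-01-01', 'YYYY-MM-DD')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- A query that filters on the partition key lets the optimizer prune
-- to a single partition instead of scanning all 300 million rows:
SELECT SUM(amount)
FROM   sales
WHERE  sale_date >= TO_DATE('2012-01-01', 'YYYY-MM-DD')
AND    sale_date <  TO_DATE('2013-01-01', 'YYYY-MM-DD');

The gain depends on the report's WHERE clause actually filtering on the partition key, so that partition pruning can kick in.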
dr_gori
Rank: Moderator
Posts: 5024
Joined: Mon, 03 May 2004 3:08 pm
Location: Portland, OR USA
Thomas F. G

Have you answered someone's question today?
https://glufke.net/oracle/search.php?search_id=unanswered

Exactly.
Another way is to make efficient use of NULLs.
Remember that NULL fields are not inserted into indexes.

An example of what I am talking about:
Let's say you want to retrieve all the still-unprocessed rows from a table (a table with 2 billion rows, where the rows you want to retrieve number only 200 or 300).
In this case, you create a status field and leave it NULL when the record has already been processed,
and set it to 'P' for the records you want to process. Then just create an index on this field.
That is, since NULLs do not enter the index, this query will be super fast: the index will hold only the 200 rows, exactly the ones you need, and it will not be necessary to sweep a giant index of 2 billion rows, nor do a full scan.
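
A minimal sketch of that trick, with invented table and column names. It relies on Oracle B-tree behavior: an entry whose indexed columns are all NULL is simply not stored in the index.

-- Hypothetical example of the NULL/index trick described above.
-- Rows already processed keep status NULL, so they never enter the
-- index; only the handful of pending rows ('P') are indexed.
CREATE TABLE work_queue (
  id      NUMBER,
  payload VARCHAR2(4000),
  status  VARCHAR2(1)   -- 'P' = pending, NULL = already processed
);

CREATE INDEX work_queue_status_ix ON work_queue (status);

-- This lookup touches a tiny index (only the pending rows),
-- no matter how many billions of rows the table holds:
SELECT id, payload
FROM   work_queue
WHERE  status = 'P';

-- After processing, setting status back to NULL removes the row
-- from the index again:
UPDATE work_queue SET status = NULL WHERE id = :processed_id;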