Updating millions of records in Oracle

Hi, I have a table with about 70 million rows:

    CREATE TABLE `user_data` (
      `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
      `userID` int(10) unsigned NOT NULL,
      `userName` varchar(45) DEFAULT NULL,
      `location` varchar(60) DEFAULT NULL,
      `lastupd` bigint(20) unsigned DEFAULT '0',
      ...............................
      PRIMARY KEY (`userID`),
      KEY `id_idx` (`id`),
      KEY `lastupd_idx` (`lastupd`),
      KEY `userName_idx` (`userName`) USING BTREE
    ) ENGINE=InnoDB;

The table has no foreign key relationships, and no joins are needed for our SELECT queries. We have high-frequency updates/inserts into this table, around 2,500 per minute, and I want to optimize our INSERTs/UPDATEs for that input rate.
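One common way to absorb that rate is to batch rows and collapse insert and update into one statement with INSERT ... ON DUPLICATE KEY UPDATE, letting the primary key on userID pick the path for each row. A minimal sketch against the table above; the row values are invented:

    INSERT INTO user_data (userID, userName, location, lastupd)
    VALUES
      (101, 'alice', 'New York', 1700000001),
      (102, 'bob',   'Boston',   1700000002),
      (103, 'carol', 'Chicago',  1700000003)
    ON DUPLICATE KEY UPDATE
      userName = VALUES(userName),
      location = VALUES(location),
      lastupd  = VALUES(lastupd);

Sending a few hundred rows per statement turns 2,500 single-row round trips a minute into a handful of batched ones, which also amortizes index maintenance.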


The elapsed time was computed as (DBMS_UTILITY.get_time - l_start), in hundredths of a second.
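That expression is the tail end of the standard PL/SQL timing pattern. A minimal self-contained sketch, with a hypothetical UPDATE standing in for the actual workload being measured:

    DECLARE
      l_start NUMBER;
    BEGIN
      l_start := DBMS_UTILITY.get_time;

      -- hypothetical workload; substitute the statement being measured
      UPDATE some_table SET some_col = 0 WHERE some_col IS NULL;
      COMMIT;

      -- get_time returns hundredths of a second
      DBMS_OUTPUT.put_line('Elapsed: ' || (DBMS_UTILITY.get_time - l_start) || ' hsec');
    END;
    /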


Thanks remediator. Actually, the same practice is followed in our old PL/SQL project to populate the tables, and we are converting it into Ab Initio. But I hadn't related it to Ab Initio performance until now.

1) Find records that need to be inserted by comparing primary keys (needs a join component and a read from the database). (Action for these records: Insert)
2) Find records that need to be updated (those filtered by the last join) and keep their primary keys in a separate file.
3) Use the primary keys of those records (unless they are slowly changing dimensions) and run the SQL component to delete them from the database, along with the records to be deleted. (Action: Delete)
4) Insert all the remaining records using the output table component. (Action: Insert)

I like your comment on how API mode affects the update and not the insert: "More often than not, the updates are not as significant in volume as the inserts, so you will only take the penalty for the Updates, not the Inserts too. What penalty is that? The Update Table runs in API Mode, in any system five to ten times slower (or worse) than bulk insert." But what is the internal mechanism that makes Ab Initio so much slower when it attempts to "update" a record?
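For what it's worth, the four steps above can also be expressed as a single set-based statement in Oracle SQL: a MERGE does the matched-update and unmatched-insert in one pass, sidestepping the row-by-row API-mode penalty. A minimal sketch; the table and column names are hypothetical placeholders, not from the thread:

    MERGE INTO target_table t
    USING staging_table s
       ON (t.pk_id = s.pk_id)
    WHEN MATCHED THEN
      UPDATE SET t.col_a = s.col_a,
                 t.col_b = s.col_b
    WHEN NOT MATCHED THEN
      INSERT (pk_id, col_a, col_b)
      VALUES (s.pk_id, s.col_a, s.col_b);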

This query was the last step in an end-user application upgrade from SQL Server 2000 to SQL Server 2008.

The UPDATE was estimated to take DAYS, not HOURS, to complete. We were called in to explore how to make this update run faster.
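A standard first move for an UPDATE of that size is to chunk it into small committed batches, which keeps the transaction log and lock footprint bounded. A rough T-SQL sketch; dbo.big_table, the SET clause, and the batch size are placeholders, not details from the original case:

    DECLARE @rows INT = 1;

    WHILE @rows > 0
    BEGIN
        -- update one bounded chunk at a time
        UPDATE TOP (10000) dbo.big_table
        SET    col_a = 'new value'
        WHERE  col_a <> 'new value';   -- placeholder predicate

        SET @rows = @@ROWCOUNT;        -- loop ends when no rows remain
    END;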

The UPDATE STATISTICS run on our problem table took hours to complete.
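When a full rebuild takes hours, the usual trade-off is a sampled update: FULLSCAN reads every row, while SAMPLE gives faster, approximate statistics. Both forms below use a placeholder table name:

    -- full scan: most accurate, slowest on a large table
    UPDATE STATISTICS dbo.big_table WITH FULLSCAN;

    -- sampled: much faster, usually good enough for the optimizer
    UPDATE STATISTICS dbo.big_table WITH SAMPLE 10 PERCENT;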