
Re: [eclipselink-users] Out of memory while committing large uow

The UnitOfWork does not really have this API; creating a new UnitOfWork per
batch may be best.

You could create a RepeatableWriteUnitOfWork instance directly; it defines
the writeChanges() and clear(true) methods that are used from JPA.
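
A rough sketch of the per-batch approach with the native Session API is below.
The Employee class and its salary accessors are placeholders for your own
model, and note that each batch becomes its own database transaction:

import java.util.List;

import org.eclipse.persistence.sessions.Session;
import org.eclipse.persistence.sessions.UnitOfWork;

public class BatchedUpdateExample {

    // Acquire a fresh UnitOfWork for each batch so that the clones and
    // backup copies it holds become garbage-collectable as soon as the
    // batch is committed.
    public void processInBatches(Session session, List<List<Employee>> batches) {
        for (List<Employee> batch : batches) {
            UnitOfWork uow = session.acquireUnitOfWork();
            for (Employee original : batch) {
                // Always modify the working clone registered in the
                // UnitOfWork, never the original cache object.
                Employee clone = (Employee) uow.registerObject(original);
                clone.setSalary(clone.getSalary() * 1.05);
            }
            // commit() writes and commits only this batch and also releases
            // the UnitOfWork, so its references do not accumulate.
            uow.commit();
        }
    }
}

If everything has to stay in a single database transaction, keep in mind that
RepeatableWriteUnitOfWork lives in an internal package
(org.eclipse.persistence.internal.sessions), so check the constructor
signature in your release before relying on it.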



Tabi wrote:
> 
> 	
> Thank you for your reply.
> 
> Can you explain how to use it with the TopLink API instead of the JPA API?
> For me, the UnitOfWork interface doesn't have clear() and flush() methods.
> 
> My questions:
> 
> 1/ Is it possible to access the clear() and flush() methods (of the JPA API)
> from the TopLink Session and UnitOfWork API?
> 
> 2/ Are there any plans to publish the clear() and flush() methods on the
> TopLink UnitOfWork?
> 
> Regards
> Tabi
> 
> 
> 
> James Sutherland wrote:
>> 
>> In JPA and EclipseLink you can use clear() for this.
>> 
>> Calling clear() after calling flush() will free the registered objects
>> and memory.
>> 
>> There is a persistence.xml option "eclipselink.flush-clear.cache" that
>> lets you configure what occurs on a clear() called after a flush(). By
>> default, any modified objects will be invalidated in the shared cache.
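
For reference, a minimal sketch of that flush()/clear() pattern with the plain
JPA API; the Order entity, its markProcessed() method, and the batch size of
1,000 are illustrative assumptions only:

import java.util.List;

import javax.persistence.EntityManager;

public class JpaBatchExample {

    private static final int BATCH_SIZE = 1000;

    // Keeps the persistence context small inside one long transaction:
    // flush() sends the pending SQL to the database without committing, and
    // clear() detaches the already flushed objects so they can be GC'd.
    public void process(EntityManager em, List<Long> ids) {
        em.getTransaction().begin(); // resource-local transaction for the sketch
        int count = 0;
        for (Long id : ids) {
            Order order = em.find(Order.class, id);
            order.markProcessed();
            if (++count % BATCH_SIZE == 0) {
                em.flush();
                em.clear();
            }
        }
        em.getTransaction().commit(); // single database commit at the end
    }
}

What a clear() after a flush() does to the shared cache is exactly what the
"eclipselink.flush-clear.cache" option above controls.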
>> 
>> 
>> 
>> Tabi wrote:
>>> 
>>> Hello, we have encountered this problem for years with TopLink; it does
>>> not exist in Hibernate.
>>> 
>>> The problem is that TopLink waits for uow.commit() before merging the
>>> changes into the cached objects and before releasing the working objects
>>> (backups and clones).
>>> 
>>> If a transaction modifies 10,000 objects, TopLink references 30,000
>>> objects (10,000 cache references, 10,000 clones, 10,000 backups). Those
>>> 30,000 objects are only released once the database commit has completed
>>> and the merge into memory has completed.
>>> 
>>> We would like TopLink to offer an option to perform several flushes within
>>> a UnitOfWork: each flush would send the SQL to the database (INSERT,
>>> UPDATE, DELETE) without a SQL commit, then mark the flushed objects as
>>> invalid and release them so that the garbage collector can reclaim them.
>>> 
>>> Example:
>>> 
>>> uow.begin(); /* begin the transaction on 10,000 objects */
>>> the application modifies the first 1,000 objects (1 to 1,000)
>>> uow.flush(); /* TopLink sends the SQL to the database, flags the 1,000
>>> objects as invalid, and releases the 1,000 objects to the GC */
>>> 
>>> the application modifies the next 1,000 objects (1,001 to 2,000)
>>> uow.flush(); /* TopLink sends the SQL to the database, flags the 1,000
>>> objects as invalid, and releases the 1,000 objects to the GC */
>>> ...
>>> 
>>> the application modifies the last 1,000 objects (9,001 to 10,000)
>>> uow.flush(); /* TopLink sends the SQL to the database, flags the 1,000
>>> objects as invalid, and releases the 1,000 objects to the GC */
>>> 
>>> uow.commit(); /* TopLink sends the SQL commit to the database; no objects
>>> left to release to the GC */
>>> 
>>> Can you tell me if you intend to implement this feature? We need
>>> long-running transaction processing that does not suffer from
>>> OutOfMemoryError.
>>> 
>>> Thank you for your help
>>> Regards
>>> Tabi
>>> 
>> 
>> 
> 
> 


-----
James Sutherland
http://wiki.eclipse.org/User:James.sutherland.oracle.com
EclipseLink: http://www.eclipse.org/eclipselink/
TopLink: http://www.oracle.com/technology/products/ias/toplink/
Wiki: http://wiki.eclipse.org/EclipseLink (EclipseLink), http://wiki.oracle.com/page/TopLink (TopLink)
Forums: http://forums.oracle.com/forums/forum.jspa?forumID=48 (TopLink), http://www.nabble.com/EclipseLink-f26430.html (EclipseLink)
Book: http://en.wikibooks.org/wiki/Java_Persistence (Java Persistence)


