Re: [eclipselink-users] Lazy loading scenario

OK, with some help from Tom's comment on the @ManyToOne thread, my problem has been sorted, as described below. These notes are a bit rough, written mainly for my own benefit, but maybe they'll help someone else. They also show the JPA 1 vs. JPA 2 difference.

Problem: I detach ALL entities from the persistence context, then merge() each Recording entity so that its recording data can be lazily loaded. The associated EventTypes (@ManyToOne) are loaded, and therefore clear()ed from the context, at the same time as the Recordings, without merge() ever being called on them. The selected EventType for a Recording is then changed after the merge is done. In JPA 1.x this was OK; in JPA 2.x the EventType instance is still detached, so a new one is created on commit and a primary key uniqueness error is thrown.
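In code, the failing sequence looks roughly like this (a minimal sketch; the entity stubs and field names are illustrative, not my actual classes):

    import java.util.List;
    import javax.persistence.*;

    @Entity
    class EventType {
        @Id Long id;
        String name;
    }

    @Entity
    class Recording {
        @Id Long id;
        @ManyToOne EventType eventType;                       // no cascade configured
        @Basic(fetch = FetchType.LAZY) @Lob byte[] soundData; // the large sound data
    }

    class FailingSequence {
        void reproduce(EntityManager em) {
            // Load everything, then detach ALL entities - Recordings AND EventTypes.
            List<Recording> recordings =
                em.createQuery("SELECT r FROM Recording r", Recording.class).getResultList();
            List<EventType> types =
                em.createQuery("SELECT e FROM EventType e", EventType.class).getResultList();
            em.clear();

            em.getTransaction().begin();
            // Re-attach only the selected Recording so its sound data can lazy-load.
            Recording managed = em.merge(recordings.get(0));

            // The EventTypes were never merge()d, so this one is still detached.
            // JPA 1.x tolerated this; under 2.x EclipseLink treats it as new...
            managed.eventType = types.get(0);

            // ...and the commit tries to INSERT it: primary key uniqueness error.
            em.getTransaction().commit();
        }
    }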

Possible fixes (all four are sketched in code after the list):

1. Set cascade=CascadeType.MERGE on the @ManyToOne. Make the reference change, THEN merge() the Recording. This works, except that the Recording must be merge()d earlier so that its recording data can be loaded in. Reference changes cannot happen before the merge(), so this is not an option for me.

2. Load ALL reference entities AFTER the clear() call, then manually call detach() on each Recording as it is finished with, to remove it from the persistence context. This works because all reference entities stay attached to the context, so they never need merge()ing. - However, this must be repeated whenever the filter changes (new Recordings are loaded), so it is not an option (all reference entities would have to be reloaded as well).

3. As now, but before each Recording is committed, first check that any related entities are also in the persistence context. E.g. if the related EventType isn't, merge() it. - Challenges: how to spot every single relationship; and is there a merge() variant/argument that forces merging of all related entities (since this can't be specified on the entity, see 1.)?

4. After loading the Recordings, iterate through them all, detach()ing each from the context, then merge() each one as it is needed. Looks most promising, and handles both the initial load and a changed filter. - CHOSEN SOLUTION
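Here are the four options as one rough sketch (reusing the illustrative stubs from the sketch above; none of this is verbatim from my app):

    import java.util.List;
    import javax.persistence.*;

    class PossibleFixes {
        // 1. Mapping change: @ManyToOne(cascade = CascadeType.MERGE) on
        //    Recording.eventType, then change the reference BEFORE merging:
        //        recording.eventType = newType;
        //        em.merge(recording);   // cascades to the detached EventType
        //    Not usable here, since the merge() has to happen first.

        // 2. Reload reference entities AFTER clear() so they stay managed;
        //    detach() each Recording individually when finished with it.
        void fix2(EntityManager em, List<Recording> recordings) {
            em.clear();
            List<EventType> types =
                em.createQuery("SELECT e FROM EventType e", EventType.class)
                  .getResultList();            // these stay managed from here on
            for (Recording r : recordings) {
                Recording managed = em.merge(r);
                // ... play it, triggering the lazy sound-data load ...
                em.detach(managed);            // release the sound data
            }
        }

        // 3. Before committing, merge any related entity that is still detached.
        void fix3(EntityManager em, Recording managed) {
            if (managed.eventType != null && !em.contains(managed.eventType)) {
                managed.eventType = em.merge(managed.eventType);
            }
        }

        // 4. CHOSEN: detach every Recording up front (the EventTypes stay
        //    managed), then merge() each Recording on demand. Re-running the
        //    loop on a new result list handles a filter change too.
        List<Recording> fix4Load(EntityManager em) {
            List<Recording> recordings =
                em.createQuery("SELECT r FROM Recording r", Recording.class)
                  .getResultList();
            for (Recording r : recordings) {
                em.detach(r);                  // JPA 2 only
            }
            return recordings;
        }

        Recording fix4Select(EntityManager em, Recording r) {
            return em.merge(r);                // re-attach; sound data can now lazy-load
        }
    }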

NOTE: detach() is new in JPA 2, so it wasn't an option before. remove()d objects cannot subsequently be merge()d or found with find(); detach()ed ones can :)
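In other words (a tiny sketch):

    import javax.persistence.EntityManager;

    class DetachVsRemove {
        void demo(EntityManager em, Long id) {
            Recording r = em.find(Recording.class, id);

            em.detach(r);     // JPA 2: evict just this one instance
            r = em.merge(r);  // fine - a detached instance can be merged back

            em.remove(r);     // r is managed again here, so remove() is legal
            em.merge(r);      // IllegalArgumentException - removed instances
                              // cannot be merged
        }
    }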


One other thing I'm trying to track down is whether I can specify that my query loads my Recording objects already detached, saving the brute-force iterate-through-all-and-detach()-them approach.

Cheers
Brad


On 6/04/11 10:48 PM, Brad Milne wrote:
My issue is also with a @ManyToOne mapping as per another question being asked: http://dev.eclipse.org/mhonarc/lists/eclipselink-users/msg06082.html

Could be the same thing...


On 6/04/11 10:15 PM, Brad Milne wrote:
I've managed to duplicate this exact behaviour in a simple example project. I zipped it up, including all dependencies (it uses Postgres, but I'm sure another db can be swapped in). The Eclipse project is set up to run with JPA 2.x, and the error can be seen. The jars (and persistence.xml) are also there to run under JPA 1.x, where it can be seen to pass. This closely matches the bug I now see after my upgrade.

I tried a find() in there instead of a merge(), as James suggested, but it made no difference. I'll try individual entity merge()s next, though that seems very brittle.

The project can be downloaded from:
http://kvisit.com/SpbKmAQ  (10MB)
(please let me know if any problems accessing it)

I appreciate any and all help!
Brad


On 2/04/11 7:48 AM, Brad Milne wrote:
Thanks James

Just working on a simple example project to try to duplicate this more simply.

Cheers
Brad


On 25/03/11 6:28 AM, James Sutherland wrote:
Odd, I can't see why an existing object would be inserted from a merge(). What
is the related object, and how is it mapped?

Why are you calling merge()? Are you updating the object? Are you creating
new objects as well, or associating new objects with the existing object?

This may be related to this bug:
https://bugs.eclipse.org/bugs/show_bug.cgi?id=340802

Try using find() to find the object first, or merging each object one by one,
without using cascade on the merge.
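Roughly, in code (a sketch; the entity and field names are assumed from your description):

    import javax.persistence.EntityManager;

    class Workarounds {
        // (a) look the existing object up instead of merging the detached copy
        Recording viaFind(EntityManager em, Long recordingId) {
            return em.find(Recording.class, recordingId);
        }

        // (b) merge the related object first, one by one, with no cascade on
        //     the mapping; the Recording's merge then resolves the reference
        //     to the already-managed EventType with the same identity
        Recording viaIndividualMerges(EntityManager em, Recording detached) {
            em.merge(detached.eventType);
            return em.merge(detached);
        }
    }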


Brad Milne wrote:

     Hi all

     I have a user screen (thick client; Swing) that presents a list of
     sound recording entities, including metadata such as date and time.
     These range up to 211 KB in size (average 30 KB), and up to 3,000 of
     them can be presented at a time. I can't fully load them in one go,
     as I get OOM errors. To solve this, I use lazy loading
     (@Basic(fetch = FetchType.LAZY)) on the sound data attribute of the
     entity. I've recently upgraded to EclipseLink 2.1.2 from 1.0.2 and
     am seeing some strange behaviour.
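
     The mapping in question, roughly (the field name is illustrative):

         import javax.persistence.*;

         @Entity
         public class Recording {
             @Id Long id;

             // Fetched on first access only, so the list query stays small.
             // Note that lazy @Basic is only a hint; EclipseLink needs
             // weaving (dynamic or static) enabled for it to take effect.
             @Basic(fetch = FetchType.LAZY)
             @Lob
             byte[] soundData;
         }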

     What happens in the code is this (sketched in code after the steps):

     Load:
       - Get all records (sound data is lazy loaded, so memory use is minimal)
       - entityManager.clear() to detach all records from the persistence context

     User selects a record:
       - entityManager.merge() the selected recording, inserting it in place of
         the detached object in the list
       - User listens to the recording, which fetches the sound data

     User selects a different recording:
       - entityManager.clear() to detach all records from the persistence
         context - yes, again. If I didn't do this, then all recordings would
         be loaded by the time the screen had been worked through, and the
         same memory issues would be present.
       - entityManager.merge() the selected recording, inserting it in place of
         the detached object in the list
       - User listens to the recording, which fetches the sound data

     and so on...
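
     One selection cycle, as a sketch (method names are illustrative):

         import java.util.List;
         import javax.persistence.EntityManager;

         class SelectionCycle {
             void select(EntityManager em, List<Recording> recordings, int index) {
                 em.clear();                      // detach all records again; without
                                                  // this, every recording visited so
                                                  // far stays loaded and memory blows up
                 Recording selected = em.merge(recordings.get(index));
                 recordings.set(index, selected); // replace the detached copy in the list
                 play(selected.soundData);        // the lazy sound-data load happens here
             }

             void play(byte[] data) { /* hand off to the audio player */ }
         }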

     This worked fine in 1.0.2. It felt clunky, but seemed to be OK.

     In 2.1.2 it does not work. Looking into it, EclipseLink seems to be
     trying to insert a new instance of a related singleton reference
     object, for which a duplicate ID exception is generated. My
     interpretation is that clear() detaches all entities, and a new
     instance is then created for the update (commit). This seems
     perfectly reasonable, but it breaks my current implementation, and
     it didn't happen in 1.0.2.

     So 1) Is there a way to detach individual entities (or their fields)
     after they have been accessed/modified and then committed?
     2) Does somebody know what changed that affected this? Was it a
     regression that has since been fixed, maybe?
     3) Or should I be completely rethinking the pattern I'm using here?

     Thanks a lot in advance
     Brad



-----
James Sutherland
http://wiki.eclipse.org/User:James.sutherland.oracle.com
EclipseLink: http://www.eclipse.org/eclipselink/
TopLink: http://www.oracle.com/technology/products/ias/toplink/
Wiki: http://wiki.eclipse.org/EclipseLink (EclipseLink), http://wiki.oracle.com/page/TopLink (TopLink)
Forums: http://forums.oracle.com/forums/forum.jspa?forumID=48 (TopLink), http://www.nabble.com/EclipseLink-f26430.html (EclipseLink)
Book: http://en.wikibooks.org/wiki/Java_Persistence (Java Persistence)
Blog: http://java-persistence-performance.blogspot.com/ (Java Persistence Performance)
