ImageDataProvider/ImageFileNameProvider#getImageData(200) is called even when an Image is created on a non-Retina display. The idea of the callback is to delay the creation of the image representation until it is actually required, but at least on Cocoa the current implementation still incurs the overhead it was supposed to avoid.
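The deferred-loading contract described above can be sketched in plain Java. This is a hypothetical simplification (the names `DataProvider`, `LazyImage`, `drawAt` are made up, not the real SWT API): the provider is only invoked for a zoom level when a representation at that zoom is actually drawn.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the intended lazy semantics; not real SWT classes.
class LazyImageSketch {
    /** Stand-in for org.eclipse.swt.graphics.ImageDataProvider. */
    interface DataProvider { Object getImageData(int zoom); }

    static class LazyImage {
        final DataProvider provider;
        final Map<Integer, Object> reps = new HashMap<>();
        int providerCalls = 0;  // exposed so the behavior can be observed

        LazyImage(DataProvider provider) {
            this.provider = provider;
            load(100);  // eagerly load only the 100% representation
        }

        Object load(int zoom) {
            providerCalls++;
            return reps.computeIfAbsent(zoom, provider::getImageData);
        }

        /** Called only when the image is actually drawn at some scale. */
        Object drawAt(int zoom) {
            Object rep = reps.get(zoom);
            return rep != null ? rep : load(zoom);  // fetch 200% on demand
        }
    }
}
```

Under this contract, constructing the image costs one provider call; the 200% call happens only if a Retina-scale draw ever occurs.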
We tried to use the platform's image-selection capability wherever the platform supports it. On the Mac we create the Image with multiple representations so that the platform takes care of rendering the right one for the current DPI. On other platforms SWT itself decides which image to select, so lazy loading is possible there. On the Mac, however, it is the OS that decides which representation to load, and we do not get a callback when it is about to load one. That is why we have to add the representations at creation time on the Mac. Please let us know your views on this.
Can't you implement a subclass of NSCustomImageRep that performs the loading lazily when the other representation is about to be used?
We went ahead with this implementation based on the following conversation. This was the suggestion we gave on March 11, 2015:

> We will not be able to use the native facilities available on the Mac in the proposed solution. In the current implementation we rely on the OS capabilities; this we need to rethink. The Mac requires the image files to exist on disk for dynamic loading, so we have to create the representations upfront and add them to the NSImage so that whenever the DPI changes the OS can switch representations.

Your response was:

> Yes, sounds good. OS X's native facilities are optimized for the use case where resources are included as files in the *.app folder. This happens rarely for SWT clients, since Java code and resources are typically shipped in JARs.

Based on this we went ahead with the current implementation. I will investigate further how we can use NSCustomImageRep. Here is how I am thinking of tackling this.

In the Image constructor:
1. Create an NSCustomImageRep from the ImageData or filename, with a draw selector function.
2. Create the NSImage object with that NSCustomImageRep.

When the draw selector function is called:
1. Get the image rep to be rendered.
2. Check the current scaling factor of the monitor, and compute the scaling factor of the rep from its size and pixelsHigh.
3. If the two match, render the rep; otherwise get the right rep using the HiDPI API, add it to the NSImage, and render that representation.

Here are some of the problems I think may occur:
1. We need to write and maintain the rendering code and do the actual drawing ourselves. This might be complex and can cause performance overhead.
2. We would not be using the facilities provided by OS X, such as image rep selection and rendering.
3. In the examples I have seen, NSCustomImageRep is used when the user has a few lines/arcs to draw and creating an image would be expensive. In our case we need to provide the rendering logic based on the data we receive.
Please let me know your views on this
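The decision step in the draw-selector outline above can be expressed as a small calculation. This is a hedged sketch in plain Java (the method names are invented, and real Cocoa reps report sizes in points and pixels as floats): the rep's own scale is its pixel height divided by its point height, and it can be drawn directly only if that matches the screen's backing scale.

```java
// Sketch of the scale comparison from the draw-selector proposal;
// not real Cocoa code, names are hypothetical.
class DrawSelectorSketch {
    /** Rep scale = pixelsHigh / pointsHigh, e.g. 32 px over 16 pt -> 2x. */
    static int repScale(int pixelsHigh, int pointsHigh) {
        return pixelsHigh / pointsHigh;
    }

    /** True if the current rep already matches the screen and needs no swap. */
    static boolean canDrawDirectly(int pixelsHigh, int pointsHigh, int screenScale) {
        return repScale(pixelsHigh, pointsHigh) == screenScale;
    }
}
```

A 16x16 pt rep backed by 16 px is fine on a 1x screen but must be replaced on a 2x screen, which is exactly the branch where the proposal would fetch the HiDPI rep and add it to the NSImage.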
I'm not too familiar with the Cocoa APIs, but from what I saw, I think this should be the strategy:
- In the new Image constructors, create an NSImage with two NSImageReps.
- The rep with the right resolution for the current screen can be loaded right away through the usual APIs.
- The other rep should be an NSCustomImageRep, or maybe another subclass of NSImageRep. It should get the correct metadata (pixel size), but its contents should only be loaded/drawn when the NSImage has actually decided to use that representation.
According to the following reference: https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSCustomImageRep_Class/index.html

NSCustomImageRep is used when you need to render an image in a custom format. It uses a delegate that does the actual drawing, so if we use NSCustomImageRep we need to write the actual image-rendering logic ourselves (which I would like to avoid). We need to think in a different direction to resolve this.
In the new Image constructors on Mac we get the filename or ImageData for zoom 200 upfront. That is required to add the multiple representations to the Cocoa NSImage. We don't load multiple images during Image construction. The lazy loading of the correct image for the current zoom level is taken care of by Cocoa.
(In reply to Lakshmi Shanmugam from comment #6)
> We don't load multiple images during Image construction.

For Image(Device, ImageDataProvider), I don't see anything that would be delayed. "imageDataProvider.getImageData(200)" is always called. To create the ImageData, the client code has to load the image directly (not lazily). The Image#createRepresentaion(ImageData) method completes the loading of the image.

For Image(Device, ImageFileNameProvider), the client has to ensure the image file is available on the file system when the "getImagePath(200)" callback returns.
=> The client can't extract the image file lazily.
Then, SWT calls NSImageRep#imageRepWithContentsOfFile(NSString), which AFAICS also reads the image file immediately.
=> Both APIs read and parse the image contents for zoom==200 all the time. Maybe the very last step of the loading is delayed by the OS, but that's not the expensive part.
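The cost difference being argued above can be made concrete with a counting provider. This is an illustrative simulation only (hypothetical names, not the real SWT construction path): the eager path invokes the provider for both zoom levels at construction time, while a lazy path would stop after the representation the current screen actually needs.

```java
// Illustration of eager vs. lazy provider calls; simplified, hypothetical.
class EagerConstructionSketch {
    interface DataProvider { String getImageData(int zoom); }

    /** Mirrors the current behavior: both reps are created immediately,
     *  so the expensive 200% decode happens even on a 100% monitor. */
    static int eagerConstructionCalls(DataProvider p) {
        int calls = 0;
        p.getImageData(100); calls++;
        p.getImageData(200); calls++;  // always paid up front
        return calls;
    }

    /** A lazy construction would ask only for the screen's own zoom. */
    static int lazyConstructionCalls(DataProvider p, int screenZoom) {
        p.getImageData(screenZoom);
        return 1;
    }
}
```

On a non-Retina screen the eager path pays two decodes where the lazy path pays one, which is the overhead this bug is about.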
Created attachment 256743 [details]
work in progress patch-1

getImageData(200) will be called only when a high-resolution monitor has been attached to the display, and this is detected dynamically using the OS notification. I posted the patch here to give a general idea of the approach; there is more work to be done.
Need to decide on what getImageData() should return. Moving to M4.
New Gerrit change created: https://git.eclipse.org/r/59530
(In reply to Eclipse Genie from comment #10)
> New Gerrit change created: https://git.eclipse.org/r/59530

I don't really like that approach. It doesn't change anything for users with a HiDPI monitor, which will soon be the majority of users. For others, it adds an SWT.Settings listener for each individual image, and each of these callbacks will call isHighResoultionMonitorAttached() [sic!] again. That looks quite inefficient, and the tons of listeners would make debugging a pain. The listeners would also need to be removed after a HiDPI monitor has been added, and when an image gets disposed.
(In reply to Markus Keller from comment #11)
> I don't really like that approach. It doesn't change anything for users with
> a HiDPI monitor, which will soon be the majority of users.

The patch addresses the issue of getImageData(200) being called on non-HiDPI monitors. For HiDPI monitors the behavior remains the same.

> For others, it adds an SWT.Settings listener for each individual image. And
> each of these callbacks will call isHighResoultionMonitorAttached() [sic!]
> again. That looks quite inefficient, and the tons of listeners would make
> debugging a pain. And the listeners would also need to be removed after a
> HiDPI monitor has been added, and when an image gets disposed.

Using the SWT.Settings listener is a temporary change to get a callback when the display settings change. I added it to demonstrate the approach and get feedback. The final fix would use the OS notification 'NSApplicationDidChangeScreenParametersNotification' directly. I'm currently working on this.
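The critique above (one listener per image, listeners never removed) suggests a single display-level listener that upgrades all registered images at once and unregisters itself after a HiDPI monitor has been seen. This is only a sketch of that alternative in plain Java, with invented names; the real fix would hang off the screen-change OS notification mentioned above.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: one shared screen-change listener instead of one per image.
// Names and structure are hypothetical, not the real SWT Display.
class SharedListenerSketch {
    interface HiDpiCallback { void loadHighResRep(); }

    static class Display {
        final List<HiDpiCallback> pending = new ArrayList<>();
        boolean listenerInstalled = true;

        void register(HiDpiCallback image) {
            if (listenerInstalled) pending.add(image);
            else image.loadHighResRep();  // HiDPI already known: load now
        }

        /** Simulates NSApplicationDidChangeScreenParametersNotification. */
        void screenParametersChanged(boolean hiDpiAttached) {
            if (!listenerInstalled || !hiDpiAttached) return;
            pending.forEach(HiDpiCallback::loadHighResRep);
            pending.clear();
            listenerInstalled = false;  // single listener removed once done
        }
    }
}
```

Disposal would additionally have to remove an image from `pending`, which is cheap with one shared list but painful with per-image SWT.Settings listeners.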
Moving to M5.
Moving to M6.
Will look at it for RC1.
(In reply to Lakshmi Shanmugam from comment #12) >The final fix would use the OS notification > 'NSApplicationDidChangeScreenParametersNotification' directly. I'm currently > working on this. Couldn't get this to work as expected. Will investigate this further for 4.6.1.
I might spend some time on this since Lakshmi is away but that won't happen immediately, deferring to 4.6.2 for now.
Moving to 4.6.3 as this is still WIP.
Moving to 4.7 as the fix is still not complete and won't make it into 4.6.3.
I don't think a Target Milestone of 4.7 is realistic. Moving to 4.8.

With more and more Retina displays in the market, the problem is actually two-fold now:
1. Don't call getImageData(200) unless a Retina monitor is active.
2. Don't call getImageData()/getImageData(100) unless a normal monitor is active.

(In reply to Sravan Kumar Lakkimsetti from comment #5)
> According to the following reference
> https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSCustomImageRep_Class/index.html
>
> The NSCustomImageRep is used when you need to render a image in custom
> format. This uses a delegate which does the actual drawing.

Yes. However, the goal of using an NSCustomImageRep is not to do our own drawing, but to have a place where we can intercept the OS's request to draw an image. When the OS has decided on a representation it wants to draw, we need a callback that asks the ImageDataProvider/ImageFileNameProvider for the actual version. Once we know which file/ImageData to load, we can delegate to the existing NSImageRep classes that know how to load and draw an image rep.

Maybe the implementation problem is similar to this:
(In reply to Lakshmi Shanmugam from bug 467205 comment #7)
> [..] That requires finding a way to use a
> objective-c block from Java code for the completion handler.

Or maybe there's another way we can subclass NSCustomImageRep and insert the callbacks there.
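The interception idea above can be sketched without any Cocoa code. This is a plain-Java stand-in (the class `LazyRep` and its `draw()` method are invented): a placeholder representation advertises the right zoom, and only asks the provider for the actual image the first time it is picked for drawing, after which it would delegate to a normal image rep.

```java
import java.util.function.IntFunction;

// Sketch of intercepting the draw request; hypothetical names,
// not real Cocoa bindings or the real SWT Image internals.
class InterceptingRepSketch {
    static class LazyRep {
        final int zoom;
        final IntFunction<String> provider;  // stands in for the SWT provider
        String loaded;                       // null until first draw

        LazyRep(int zoom, IntFunction<String> provider) {
            this.zoom = zoom;
            this.provider = provider;
        }

        /** Plays the role of NSCustomImageRep's draw callback. */
        String draw() {
            if (loaded == null) loaded = provider.apply(zoom);  // fetch once
            return loaded;  // then delegate to a normal image rep
        }
    }
}
```

The provider is never called for a zoom the OS never chooses, and repeated draws reuse the loaded representation, which is the behavior both sides of the discussion are after.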
Moving to 4.9, please re-target as required.
Resetting target.