Bug 215936 - "loading files of data set cache" exception thrown under multi-threads
Summary: "loading files of data set cache" exception thrown under multi-threads
Status: RESOLVED FIXED
Alias: None
Product: z_Archived
Classification: Eclipse Foundation
Component: BIRT
Version: 2.3.0
Hardware: PC Windows Server 2003
Importance: P2 normal
Target Milestone: 2.3.0 M6
Assignee: Lin Zhu CLA
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2008-01-21 00:06 EST by Wu Yu CLA
Modified: 2008-03-25 06:11 EDT
CC List: 3 users

See Also:


Attachments
crosstab report design file: sample(olapxtab).rptdesign (37.68 KB, application/octet-stream)
2008-01-21 00:09 EST, Wu Yu CLA

Description Wu Yu CLA 2008-01-21 00:06:32 EST
This exception occurred during the execution of DtE in DIRECT_PRESENTATION mode.

Test Description:
Task: runandrender
threads: 4
application server: tomcat


Exception details:

SEVERE: An exception occurred during processing. Please see the following message for details:
There is an error in loading files of data set cache.
There is an error in loading files of data set cache.
org.eclipse.birt.report.data.adapter.api.AdapterException: An exception occurred during processing. Please see the following message for details:
There is an error in loading files of data set cache.
There is an error in loading files of data set cache.
	at org.eclipse.birt.report.data.adapter.impl.DataRequestSessionImpl.prepare(DataRequestSessionImpl.java:479)
	at org.eclipse.birt.report.engine.data.dte.AbstractDataEngine.doPrepareQuery(AbstractDataEngine.java:179)
	at org.eclipse.birt.report.engine.data.dte.AbstractDataEngine.prepare(AbstractDataEngine.java:160)
	at org.eclipse.birt.report.engine.executor.ReportExecutor.execute(ReportExecutor.java:101)
	at org.eclipse.birt.report.engine.internal.executor.wrap.WrappedReportExecutor.execute(WrappedReportExecutor.java:59)
	at org.eclipse.birt.report.engine.internal.executor.dup.SuppressDuplciateReportExecutor.execute(SuppressDuplciateReportExecutor.java:51)
	at org.eclipse.birt.report.engine.internal.executor.wrap.WrappedReportExecutor.execute(WrappedReportExecutor.java:59)
	at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:136)
	at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:66)
	at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(Unknown Source)
	at org.eclipse.birt.report.service.ReportEngineService.runAndRenderReport(Unknown Source)
	at org.eclipse.birt.report.service.BirtViewerReportService.runAndRenderReport(Unknown Source)
	at org.eclipse.birt.report.service.actionhandler.BirtGetPageAllActionHandler.__execute(Unknown Source)
	at org.eclipse.birt.report.service.actionhandler.AbstractBaseActionHandler.execute(Unknown Source)
	at org.eclipse.birt.report.soapengine.processor.AbstractBaseDocumentProcessor.__executeAction(Unknown Source)
	at org.eclipse.birt.report.soapengine.processor.AbstractBaseComponentProcessor.executeAction(Unknown Source)
	at org.eclipse.birt.report.soapengine.processor.BirtDocumentProcessor.handleGetPageAll(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:585)
	at org.eclipse.birt.report.soapengine.processor.AbstractBaseComponentProcessor.process(Unknown Source)
	at org.eclipse.birt.report.soapengine.endpoint.BirtSoapBindingImpl.getUpdatedObjects(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)...

Caused by: org.eclipse.birt.data.engine.core.DataException: There is an error in loading files of data set cache.
There is an error in loading files of data set cache.
	at org.eclipse.birt.report.data.adapter.impl.DataRequestSessionImpl.materializeCube(DataRequestSessionImpl.java:542)
	at org.eclipse.birt.report.data.adapter.impl.DataRequestSessionImpl.prepare(DataRequestSessionImpl.java:947)
	at org.eclipse.birt.report.data.adapter.impl.DataRequestSessionImpl.prepare(DataRequestSessionImpl.java:470)
Comment 1 Wu Yu CLA 2008-01-21 00:09:59 EST
Created attachment 87373 [details]
crosstab report design file: sample(olapxtab).rptdesign
Comment 2 Lin Zhu CLA 2008-01-30 01:18:38 EST
Only reproducible in a specific Tomcat environment. Need more time to investigate.
Comment 3 xiaofeng zhang CLA 2008-02-15 02:39:34 EST
A JVM-level cache is used to enhance performance when creating a cube. But this strategy may cause problems in a multi-threaded environment.
Comment 4 Wenfeng Li CLA 2008-02-15 16:59:49 EST
(In reply to comment #3)
> JVM level is used to enhance performance during creating a cube. But this
> stratagem may bring problem in Multi-Threading environment.
> 

What kind of data cache is used in cube creation?  Is there any data cache that is shared among different engine run tasks?

I suggest we avoid any cache at the DtE level. All caching should happen only within one engine run task. Sharing a data cache beyond one engine task would have multi-threading issues, as well as the side effect of leaving residual caches after a task is over.
Comment 5 Lin Zhu CLA 2008-02-17 21:05:43 EST
(In reply to comment #4)
> (In reply to comment #3)
> > JVM level is used to enhance performance during creating a cube. But this
> > stratagem may bring problem in Multi-Threading environment.
> > 
> 
> What kind of data cache is used in cube creation?  Is there any data cache that
> is shared among different engine runtasks?
> 
> I suggest we avoid any cache at DtE level. All caching should be only within in
> one engine run task.  Sharing data cache beyond one engine task would be   have
> multi-threading issue as well as a side effect of creating a residual caches
> after a task is over.   
> 

In the current DtE cube creation implementation, a cross-engine-task cache is used to cache the data set data that needs to be read multiple times during cube creation. This is the source of the problem.

To fix this bug, we would like to create engine-task-specific caches, so that each cache serves one and only one engine task.

The cache needs to stay at the DtE level, because the engine does not know the data set data info (the engine only knows the result set data info).
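The fix proposed above can be sketched as follows. Class and method names here are hypothetical illustrations, not BIRT's actual API: the point is that the cache becomes a field owned by each engine task instead of a static, JVM-wide structure shared across threads.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a per-task data set cache. A static (JVM-level)
// cache shared by concurrent runAndRender tasks is racy; owning the cache
// per task removes the sharing entirely.
class DataSetCache {
    private final Map<String, Object> entries = new HashMap<String, Object>();

    Object get(String key) {
        return entries.get(key);
    }

    void put(String key, Object value) {
        entries.put(key, value);
    }
}

// Each engine task owns its own cache instance, so no cross-thread
// sharing occurs and no residual entries outlive the task.
class EngineTask {
    private final DataSetCache cache = new DataSetCache();

    DataSetCache getCache() {
        return cache;
    }
}
```

With this shape, two concurrent tasks never observe each other's cached data set rows, and the cache is garbage-collected with the task.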
Comment 6 Wenfeng Li CLA 2008-02-18 22:11:45 EST
Agree with comment #5.  How about introducing a workspace concept for DtE? The engine task can manage the lifecycle of this workspace, and DtE can create and destroy caches in the workspace.  A workspace is only used by one engine task.
Comment 7 Lin Zhu CLA 2008-02-19 01:12:11 EST
(In reply to comment #6)
> Agree comment #5.  How about introduce a workspace concept for DtE? Engine task
> can manage the lifecycle of this workspace, and DtE can create and destroy
> cache in the workspace?  A workspace is only used by an engine task.
> 

At the current stage, only one data engine is created for each engine task, so engine task and data engine instances have a 1:1 relationship. As long as this 1:1 relationship persists, we may not need to introduce the workspace idea.

In case an engine task includes multiple data engines (although the engine has never done this, we do not prohibit it from an API perspective), the workspace idea would be good. All the data engines from one engine task could share the same workspace.

Not sure whether the 1:n relationship will be implemented by the engine in the future. If that is possible, then the workspace idea helps.
Comment 8 Wei Yan CLA 2008-02-19 01:22:46 EST
In the future, multiple tasks may share one data engine, but each task should have its own session, so we may have two levels of cache:

data engine level: the cache is used by all tasks executed in the same JVM.
session level: the cache is used by the data sets executed in one task.

The data engine level cache should be thread safe, while the session level cache needn't be.
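The two-level design described above could be sketched like this (names are hypothetical, not BIRT code): the shared data-engine-level cache uses a concurrent map because many task threads touch it, while the session-level cache is a plain map since it is confined to a single task's thread.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical two-level cache sketch.

// Data engine level: shared by all tasks running in the same JVM,
// so it must be thread safe.
class EngineLevelCache {
    private final Map<String, Object> shared =
            new ConcurrentHashMap<String, Object>();

    Object get(String key) {
        return shared.get(key);
    }

    void put(String key, Object value) {
        shared.put(key, value);
    }
}

// Session level: confined to one task (one thread), so an
// unsynchronized map is sufficient and cheaper.
class SessionCache {
    private final Map<String, Object> local = new HashMap<String, Object>();

    Object get(String key) {
        return local.get(key);
    }

    void put(String key, Object value) {
        local.put(key, value);
    }
}
```

A session would consult its own cache first and fall back to the shared engine-level cache, keeping all unsynchronized state strictly per task.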
Comment 9 Lin Zhu CLA 2008-03-25 06:11:17 EDT
At the current stage, one data-engine-level cache is enough to resolve the problem. I've modified the code so that the data-engine-level cache, rather than the JVM-level cache, is used when creating a cube. The bug is thus resolved.