
[platform-core-dev] Re: JobManager


When you say that a search query runs concurrently, do you mean at the same time as an indexing job?  I thought the search query would wait until all indexing jobs completed before starting.  I believe there are equivalent mechanisms in the core job manager that can do what you want:

IJob.ForceImmediate (sketched in code below):
- pause waiting indexing jobs using the API: IJobManager.sleep(Object family), where "family" is some unique identifier for which all indexing jobs respond true in their Job.belongsTo() method.  The family idea is taken straight from the JDT, although the family identifier was changed from String to Object to allow more flexibility.
- wait until all running indexing jobs have completed using IJobManager.join(family)
- schedule the search job
- resume all indexing jobs (IJobManager.wakeUp(family)).
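
For concreteness, here is a rough sketch of that ForceImmediate sequence. It assumes the family-based sleep/join/wakeUp methods described above (using a join variant that also takes a progress monitor); the INDEXING_FAMILY object and the SearchCoordinator class are hypothetical names for illustration, not existing API:

    import org.eclipse.core.runtime.jobs.IJobManager;
    import org.eclipse.core.runtime.jobs.Job;

    public class SearchCoordinator {

        /** Hypothetical family object; every indexing job answers true to it in belongsTo(). */
        public static final Object INDEXING_FAMILY = new Object();

        /** ForceImmediate: pause waiting indexing jobs, drain running ones, search, resume. */
        public void searchNow(IJobManager manager, Job searchJob) throws InterruptedException {
            manager.sleep(INDEXING_FAMILY);          // 1. put waiting indexing jobs to sleep
            try {
                manager.join(INDEXING_FAMILY, null); // 2. wait for running indexing jobs to finish
                searchJob.schedule();                // 3. run the search
                searchJob.join();                    //    and wait for its result
            } finally {
                manager.wakeUp(INDEXING_FAMILY);     // 4. resume indexing
            }
        }
    }

Each indexing job would then override belongsTo(Object family) to answer true for INDEXING_FAMILY.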

IJob.CancelIfNotReady (sketched below):
- check if there are any running indexing jobs using IJobManager.find(Object family).
- cancel or proceed as applicable.
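
A matching sketch for CancelIfNotReady, using the same hypothetical family object; a non-empty result from find(family) is taken to mean the indexer is not ready:

    /** Sketch only: search only if no indexing job of the family is waiting, sleeping or running. */
    boolean searchIfReady(IJobManager manager, Job searchJob) {
        Job[] indexingJobs = manager.find(SearchCoordinator.INDEXING_FAMILY);
        if (indexingJobs.length > 0)
            return false;         // indexer not ready: the caller cancels the query
        searchJob.schedule();     // indexer idle: proceed with the search
        return true;
    }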

IJob.WaitUntilReady (sketched below):
- wait until all running indexing jobs have completed using IJobManager.join(family)
- schedule the search job
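
And WaitUntilReady is just the join-then-schedule portion of the first sketch:

    /** Sketch only: block until all indexing jobs of the family are done, then search. */
    void searchWhenReady(IJobManager manager, Job searchJob) throws InterruptedException {
        manager.join(SearchCoordinator.INDEXING_FAMILY, null); // wait for indexing to finish
        searchJob.schedule();                                   // then run the search
    }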

Job scheduling rules can also be used to ensure that indexing jobs don't run concurrently with each other or with search jobs.  See ISchedulingRule and Job.get/setRule for more info.
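
For example, a single rule instance shared by all indexing and search jobs would serialize them. The IndexRule class below is a hypothetical illustration, not existing API; it treats any two jobs that carry the same rule instance as conflicting:

    import org.eclipse.core.runtime.jobs.ISchedulingRule;

    /** Hypothetical rule: two jobs conflict iff they hold the same IndexRule instance. */
    public class IndexRule implements ISchedulingRule {
        public boolean contains(ISchedulingRule rule) {
            return rule == this;
        }
        public boolean isConflicting(ISchedulingRule rule) {
            return rule == this;
        }
    }

    // Usage: hand the same instance to every indexing and search job,
    // so the job manager never runs two of them at once:
    //   ISchedulingRule indexRule = new IndexRule();
    //   indexingJob.setRule(indexRule);
    //   searchJob.setRule(indexRule);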

I did notice you use a multiple reader policy on your locks.  My thinking was that a multiple reader lock could be built on top of the primitive mutex lock in the API, although I haven't tried this.  My hope was to avoid adding complex locking mechanisms in the API, with the idea that if a primitive lock is given, it can be used as a foundation for more complex strategies.
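
As a strawman, a multiple-reader/single-writer policy might be layered on an ordinary monitor along these lines; this is plain Java synchronization rather than the proposed platform lock API, and it makes no attempt at writer preference, reentrancy or deadlock detection:

    /** Sketch only: multiple readers, one writer, built on an ordinary Java monitor. */
    public class SimpleReadWriteLock {
        private int activeReaders = 0;
        private boolean writerActive = false;

        public synchronized void acquireRead() throws InterruptedException {
            while (writerActive)
                wait();                 // readers wait while a writer holds the lock
            activeReaders++;
        }

        public synchronized void releaseRead() {
            if (--activeReaders == 0)
                notifyAll();            // last reader out may admit a waiting writer
        }

        public synchronized void acquireWrite() throws InterruptedException {
            while (writerActive || activeReaders > 0)
                wait();                 // writers wait for all readers and other writers
            writerActive = true;
        }

        public synchronized void releaseWrite() {
            writerActive = false;
            notifyAll();                // wake waiting readers and writers
        }
    }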

My main concern about the index manager is the granularity of jobs.  I don't think the core IJobManager will scale well if all of its users are enqueuing hundreds or thousands of jobs.  I believe you currently break up project indexing into separate jobs for each compilation unit in the project.  I was wondering if it would be possible to avoid breaking it up this way, to help keep the number of jobs low.  Was there a reason for doing it this way (I'm thinking it makes pre-emption attempts more responsive)?
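
One way to keep the job count low while still keeping pre-emption responsive would be a single per-project indexing job that checks for cancellation between compilation units. The IndexManager and CompilationUnit types below are placeholders, not JDT/Core API:

    import org.eclipse.core.runtime.IProgressMonitor;
    import org.eclipse.core.runtime.IStatus;
    import org.eclipse.core.runtime.Status;
    import org.eclipse.core.runtime.jobs.Job;

    /** Placeholder types standing in for whatever JDT/Core actually uses. */
    interface CompilationUnit {}
    interface IndexManager { void index(CompilationUnit unit); }

    /** Sketch only: one job per project, with a cancellation check between units. */
    public class IndexProjectJob extends Job {
        private final CompilationUnit[] units;
        private final IndexManager indexManager;

        public IndexProjectJob(String projectName, CompilationUnit[] units, IndexManager indexManager) {
            super("Indexing " + projectName);
            this.units = units;
            this.indexManager = indexManager;
        }

        protected IStatus run(IProgressMonitor monitor) {
            monitor.beginTask(getName(), units.length);
            try {
                for (int i = 0; i < units.length; i++) {
                    if (monitor.isCanceled())
                        return Status.CANCEL_STATUS; // pre-emption point between units
                    indexManager.index(units[i]);    // index one compilation unit
                    monitor.worked(1);
                }
            } finally {
                monitor.done();
            }
            return Status.OK_STATUS;
        }

        public boolean belongsTo(Object family) {
            return family == SearchCoordinator.INDEXING_FAMILY;
        }
    }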



Philippe P Mulet
07/04/2003 02:57 PM

To:      platform-core-dev@xxxxxxxxxxx
From:    Philippe P Mulet/France/IBM@IBMFR
Subject: JobManager


Glancing at the spec of IJobManager, I do not see how it could be used to perform a search query (job) concurrently with the job manager...

Typically, our JDT/Core job manager is used to drive the background indexing effort, and search queries can be initiated concurrently. If we migrate the background indexing story to the platform JobManager, then we still need to be able to initiate search actions concurrently.

Do you imagine these to be posted as subsequent jobs? If so, then there should be a way to have jobs be notified of the progress of their prerequisites (typically a search query can only be performed once the indexing jobs have completed). Note that when clients are waiting, we also pump up the background thread priority; this was a fairly good speed improvement.

Also, we sometimes need to know whether the indexer is ready to perform an operation (CancelIfNotReady); what is the equivalent of this?

On the locking front, we had to define a read/write locking mechanism (multiple readers, one writer) to protect access to our indexes. I was wondering whether your generic locking story shouldn't be a little more permissive than just mutex behavior.

