Build 20030611. I was running the JDT/Core tests under the debugger while Norton Antivirus was also running. At one point, the tests hung. The indexer was in an infinite loop: it was trying to index D:/eclipse/lib/lib27.jar (which doesn't exist).
The problem comes from the (external) JAR indexing job, which consistently fails to index the missing JAR due to an IOException. It thus keeps discarding everything it knows about the index, which causes the index to be reacquired later on and to fail again on the next iteration (for the same reason). The proposed workaround is to check for ZipException and, if one is caught, pretend the index is OK (empty). This means that missing/invalid archives are treated as if they were empty. Ideally, the IOException handling should be narrowed to the offending code (opening the ZIP vs. saving the index). I wonder whether the same protection shouldn't be added to other kinds of resources as well. Note that an alternative existence check could come into play to deal with this, but it wouldn't capture the possible infinite regression raised by corrupted JARs.
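To illustrate the distinction suggested above, here is a minimal standalone sketch (not JDT code; the class and method names are made up) showing how a corrupt archive (ZipException) can be told apart from other I/O failures such as a missing file, so that only the former is treated as an empty index. Note that ZipException extends IOException, so the catch order matters:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipException;
import java.util.zip.ZipFile;

// Illustrative sketch: classify archive-open failures so that corrupt
// archives can be indexed as "empty" instead of being retried forever.
public class ZipCheck {
    public static String classifyOpen(File file) {
        try (ZipFile zip = new ZipFile(file)) {
            return "ok";
        } catch (ZipException e) {
            // file exists but is not a valid ZIP: treat the index as empty
            return "corrupt";
        } catch (IOException e) {
            // missing file or other I/O failure
            return "io-error";
        }
    }

    public static void main(String[] args) throws IOException {
        // a file containing garbage bytes is not a valid ZIP archive
        File bogus = File.createTempFile("not-a-zip", ".jar");
        try (FileOutputStream out = new FileOutputStream(bogus)) {
            out.write(new byte[] {1, 2, 3});
        }
        System.out.println(classifyOpen(bogus));                   // corrupt
        System.out.println(classifyOpen(new File("missing.jar"))); // io-error
        bogus.delete();
    }
}
```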
Created attachment 5207 [details] Proposed patch for AddJarFileToIndex.execute()
The reason the tests are failing is that lib27.jar is requested for indexing, but before the indexer gets to it, the file is removed by the test itself (the indexer is behind).
But how does that cause an infinite loop? The job is deleting the index, but who is constantly asking for it? If the test that deleted the jar file is done, who is left that cares about it? I propose this change to the execute method:

try {
	if (resource != null) {
		IPath location = this.resource.getLocation();
		if (location == null) return false;
		if (JavaModelManager.ZIP_ACCESS_VERBOSE)
			System.out.println("(" + Thread.currentThread() + ") [AddJarFileToIndex.execute()] Creating ZipFile on " + location); //$NON-NLS-1$ //$NON-NLS-2$
		zip = new ZipFile(location.toFile());
		zipFilePath = (Path) this.resource.getFullPath().makeRelative(); // absolute path relative to the workspace
	} else {
		if (JavaModelManager.ZIP_ACCESS_VERBOSE)
			System.out.println("(" + Thread.currentThread() + ") [AddJarFileToIndex.execute()] Creating ZipFile on " + this.indexPath); //$NON-NLS-1$ //$NON-NLS-2$
		zip = new ZipFile(this.indexPath.toFile());
		zipFilePath = (Path) this.indexPath; // path is already canonical since coming from a library classpath entry
	}
} catch (IOException e) {
	// if the zip couldn't be found or read properly, replace the index in the cache with an empty one
	manager.recreateIndex(this.indexPath);
	return true;
}
What we observed was:
- AddJarFileToIndex.execute(...) called getIndex(indexPath, false, false).
- IndexManager.getIndex(...) didn't have an in-memory index (indexes.get(indexPath) returned null).
- Correspondingly, its state was UNKNOWN; this caused a call to rebuildIndex(...).
- rebuildIndex(...) changed the state to REBUILDING_STATE and posted another AddJarFileToIndex job for the same jar.
- Back in AddJarFileToIndex.execute(...), it tried to open the jar and failed with an IOException.
- The catch IOException block removed the index and its state: it was back to UNKNOWN.
- When run, the second posted job did the same, hence the infinite loop.
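The cycle in the steps above can be simulated with a minimal standalone sketch (not JDT code; the names mimic the ones in the report but the logic is deliberately stripped down). Each failing rebuild job resets the state to UNKNOWN, and the next getIndex call posts yet another rebuild job, so the job queue never drains:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative simulation of the reported cycle: a rebuild job that fails
// with an I/O error resets the index state to UNKNOWN, and the next call
// to getIndex schedules yet another rebuild job for the same jar.
public class IndexLoopDemo {
    enum State { UNKNOWN, REBUILDING }

    static State state = State.UNKNOWN;
    static final Deque<Runnable> jobQueue = new ArrayDeque<>();

    // Mimics IndexManager.getIndex(path, false, false): an UNKNOWN state
    // triggers a rebuild, which posts another AddJarFileToIndex-style job.
    static void getIndex() {
        if (state == State.UNKNOWN) {
            state = State.REBUILDING;
            jobQueue.add(IndexLoopDemo::executeJob);
        }
    }

    // Mimics AddJarFileToIndex.execute() when the jar cannot be opened:
    // the IOException handler discards the index, so state drops to UNKNOWN.
    static void executeJob() {
        getIndex();
        state = State.UNKNOWN; // jar is gone: open fails, index removed
    }

    // Runs queued jobs up to a cap; returns how many actually ran.
    static int runJobs(int cap) {
        int ran = 0;
        while (!jobQueue.isEmpty() && ran < cap) {
            jobQueue.poll().run();
            ran++;
        }
        return ran;
    }

    public static void main(String[] args) {
        jobQueue.add(IndexLoopDemo::executeJob);
        // the cap is always reached: each job re-posts another one
        System.out.println("jobs run before cap: " + runJobs(100)); // 100
    }
}
```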
That cannot happen, since every update job calls aboutToUpdateIndex, which makes sure the state is changed from UNKNOWN to UPDATE or REBUILD; getIndex therefore only sees indexes in an UNKNOWN state from query jobs.
Put in protection for rebuild jobs to ensure they do not cause another rebuild job to be added when they call getIndex while the index is no longer in a rebuilding state.
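The effect of that guard can be sketched with a variant of the earlier loop simulation (again, not JDT code; the names are illustrative). When getIndex is called from within a rebuild job and the index has already left the REBUILDING state, no new rebuild job is posted, so the queue drains after a single failing job instead of looping:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of the guard: a rebuild job must not schedule
// another rebuild once the index is no longer in the REBUILDING state.
public class GuardedIndexDemo {
    enum State { UNKNOWN, REBUILDING }

    static State state = State.UNKNOWN;
    static final Deque<Runnable> jobQueue = new ArrayDeque<>();

    static void getIndex(boolean fromRebuildJob) {
        // The protection: calls originating from a rebuild job bail out
        // unless the index is still being rebuilt.
        if (fromRebuildJob && state != State.REBUILDING) return;
        if (state == State.UNKNOWN) {
            state = State.REBUILDING;
            jobQueue.add(GuardedIndexDemo::executeJob);
        }
    }

    static void executeJob() {
        getIndex(true);        // guarded call from within the rebuild job
        state = State.UNKNOWN; // jar open still fails; index discarded
    }

    static int runJobs(int cap) {
        int ran = 0;
        while (!jobQueue.isEmpty() && ran < cap) {
            jobQueue.poll().run();
            ran++;
        }
        return ran;
    }

    public static void main(String[] args) {
        jobQueue.add(GuardedIndexDemo::executeJob);
        // only the first job runs; the guard suppresses the re-post
        System.out.println("jobs run: " + runJobs(100)); // 1
    }
}
```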
Verified.