Re: [cdt-dev] Handling huge C/C++ project in eclipse

It's easy to blame Eclipse 4. But I'd like to see some real memory analysis that shows why it's consuming as much as it does. Then we can act on fixing it.

Sent from my BlackBerry 10 smartphone on the Rogers network.
From: kesselhaus
Sent: Sunday, July 20, 2014 6:09 PM
To: cdt-dev@xxxxxxxxxxx
Reply To: CDT General developers list.
Subject: Re: [cdt-dev] Handling huge C/C++ project in eclipse

Hi,

I guess you are not the only one.

Since Eclipse 4.x hit the ground, Eclipse seems to have become bloated, laggy, and much more RAM-hungry.
Not everyone is lucky enough to get one of those fast machines with lots of RAM.
My work machine, for example, "only" has 8GB of RAM.
But Eclipse isn't the only tool I use, and some of the other programs also consume 500MB of RAM or more.
And more and more programs here are Java-based as well, be it Lotus Notes, the MKS Integrity client, AUTOSAR configuration tools ...

So I wonder what the intended usage of Eclipse actually is.
Several separate Eclipse installations, each with only the plugins needed for a specific task?
E.g. one for C/C++, maybe one for modeling tasks, and so on?

Or does it depend on the workspace and the projects it contains?
But switching between workspaces all the time isn't that nice either.

As you might already know, I'm not a Java developer, so I have no clue what effect these VM settings actually have.
I only found out that one should set -Xmx to something like 1024M on Windows. But I still sometimes get an out-of-memory or heap-space error, and sometimes Eclipse needs several seconds before it reacts to what I do, like opening the context menu on a project or other simple things.

So, what should one do?
I guess this is not just a problem of CDT itself, but of the whole Eclipse zoo of plugins.
What are the intentions of the Eclipse Foundation here? Are there any activities underway to get the 4.x stream to perform like the old 3.x stream?



On 26.06.2014 00:29, Doug Schaefer wrote:
I've been worried that we've increased memory requirements in the last couple of releases. I'd love to understand why that is once I get other things out of the way. Boost is the best example.

Sent from my BlackBerry 10 smartphone on the Rogers network.
From: Bill White
Sent: Wednesday, June 25, 2014 2:35 PM
To: CDT General developers list.
Reply To: CDT General developers list.
Cc: Parallel Tools Platform general developers
Subject: Re: [cdt-dev] Handling huge C/C++ project in eclipse

You might think about changing your eclipse.ini file.
I have a much larger project than that.  I have about 6000 files, about 84 MB,
and 2.5M lines of code, including comments.  The indexer runs on this in about an hour.
I found I ran out of memory with Luna, but when I set these values in my eclipse.ini file,
the indexing seemed to complete.

-XX:MaxPermSize=256m
-Xms1G
-Xmx4G

This seems pretty big, but it does complete.  I don't know if these numbers are sensible or
not.  Perhaps there are better ones.  I mostly just guessed.
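In case it helps anyone copying these: the options have to go below the -vmargs line at the end of eclipse.ini, since everything after -vmargs is passed to the JVM rather than to the Eclipse launcher. A minimal sketch of that section, using the values above:

-vmargs
-XX:MaxPermSize=256m
-Xms1G
-Xmx4G

-Xms is the initial heap size, -Xmx the maximum heap size, and -XX:MaxPermSize the PermGen size (which only matters on Java 7 and older; Java 8 ignores it).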

I do have a really fast computer, though.


On Wed, Jun 25, 2014 at 2:05 PM, Vishal Gupta <vishal.vit@xxxxxxxxx> wrote:
Thanks Roland and Doug.

After reading your comments I feel like I am stuck again with my huge code base, on which I want to use Eclipse in its original form.

Bringing my huge code base onto my local machine or onto my remote server is not the problem. The problem arises when the indexing starts: even if I assign 4GB of RAM to Eclipse, it still throws out-of-memory errors while indexing (both on my Windows and Linux boxes). I tried all possible permutations and combinations of allocating memory to Eclipse, but with no success.

So I thought of making some weird changes in Eclipse.

The main component of Eclipse that I want to use is CDT, so my scope was initially limited to that.

The two things I identified as the bottlenecks in Eclipse for a huge code base are:
1. The Eclipse recursive refresh, which itself takes a long time for a huge code base. And once it completes, it notifies all the resource change listeners. This is when all hell breaks loose with CDT.
2. The CDT indexer, which is the most memory-intensive part. When it tries to index the huge code base in one go, it throws out of memory most of the time, and while it is running, Eclipse is not usable at all. Even if it does complete the indexing, there is no guarantee that Eclipse will not throw out of memory later (my personal feeling is that CDT keeps a lot of information in memory).

So I put my hacks into the Eclipse code:

1. I stopped the DEPTH_INFINITE refresh in Eclipse.
2. Created a custom recursive refresh call "on demand" (via a command), and allowed a DEPTH_ONE refresh call on tree expansion from the Project Explorer (a rough sketch of such a handler follows below this list).
3. Stopped the recursive refresh calls from the CDT components.
4. Plugged in a pre-built index file, which I created by running the CDT indexer in headless mode on my server, via a CDT extension point (see the command sketch further below). The pre-built index file is 4GB in size.
5. To make Call Hierarchy and Open Declaration usable, I just patched in a refresh call for the bindings found in the pre-built index file.
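For the curious, the "on demand" refresh from step 2 boils down to something like the sketch below. Only IResource.refreshLocal() and the DEPTH_* constants are standard Eclipse API; the handler class, its name and its wiring are illustrative, not my exact patch.

import org.eclipse.core.commands.AbstractHandler;
import org.eclipse.core.commands.ExecutionEvent;
import org.eclipse.core.commands.ExecutionException;
import org.eclipse.core.resources.IResource;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.NullProgressMonitor;
import org.eclipse.jface.viewers.ISelection;
import org.eclipse.jface.viewers.IStructuredSelection;
import org.eclipse.ui.handlers.HandlerUtil;

// Hypothetical command handler: refreshes only the selected resources,
// one level deep, instead of letting the platform walk the whole tree.
public class OnDemandRefreshHandler extends AbstractHandler {
    @Override
    public Object execute(ExecutionEvent event) throws ExecutionException {
        ISelection selection = HandlerUtil.getCurrentSelection(event);
        if (!(selection instanceof IStructuredSelection))
            return null;
        for (Object element : ((IStructuredSelection) selection).toList()) {
            if (element instanceof IResource) {
                try {
                    // The stock refresh uses IResource.DEPTH_INFINITE; DEPTH_ONE
                    // keeps the refresh local to the node the user expanded.
                    // A real implementation would run this in a WorkspaceJob
                    // rather than on the UI thread.
                    ((IResource) element).refreshLocal(IResource.DEPTH_ONE,
                            new NullProgressMonitor());
                } catch (CoreException e) {
                    throw new ExecutionException("Refresh failed", e);
                }
            }
        }
        return null;
    }
}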

Cool, now I can create a C/C++ project without worrying about the code base size. The project gets created in no time. Eclipse doesn't run out of memory, since it doesn't deal with the complete source tree. I can recursively refresh only those resources that I want to look into or modify in my code base, and I can still use most of the CDT functionality I need while coding.

This is quite an ugly approach, but it works for me since the refresh and indexing happen on demand.
The major drawback is that I have modified the Eclipse refresh mechanism.
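In case anyone wants to reproduce the pre-built index part from step 4: if I remember the CDT export tooling correctly, the headless run on the server was along these lines. The application id and option names are from memory, so please double-check them against the CDT docs; the paths and the fragment id are just placeholders.

eclipse -nosplash -application org.eclipse.cdt.core.GeneratePDOM \
    -target /exports/hugecode.pdom \
    -source /work/huge-code-base \
    -id com.example.hugecode

The generated .pdom file is then contributed to the workspace through what I believe is the org.eclipse.cdt.core.CIndex extension point (a ReadOnlyPDOMProvider), which is the extension point I referred to in step 4.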

So I was looking into remote projects, but again they have their own problems, as mentioned in the mail thread below. A synchronized project will bring the complete source tree, and its indexing (in one go), back to my local machine, and Eclipse will surely run out of memory again.

I have changed the subject of the mail and added the cdt-dev mailing list to the "To" list, in case anyone can suggest something cleaner.

Thanks,
Vishal


On Wed, Jun 25, 2014 at 2:20 AM, Doug Schaefer <dschaefer@xxxxxxx> wrote:
This has been a very interesting discussion. It certainly reminds us why the remote project support was added in the first place. I personally think that it's such a hard problem that there probably isn't a good solution for hooking up an IDE to such an extremely large code base located on a remote machine, as much as we want there to be.

But one takeaway I have that might make synchronized projects work out is that your desktop probably has as much power as the server you're running the indexer on, especially since the indexer is single-threaded. It'll probably take about the same amount of time to index on either, and with the remote approach you're left with the tooling doing expensive lookups on the server and the latency of passing results back to the workstation. I'd be excited to hear if there was much success using it that way and have my doubts killed.

Also, gigs of source isn't such a big thing any more when we have terabyte drives in our laptops. It may be an expensive first-time configuration to get all that copied over to the workstation, but if the synchronizer has done its job well, things should be very fast after that.

Anyway, just another opinion for the pile. You get to choose your path, but I'm afraid the remote project one isn't well travelled and I'm not as confident about where it leads.

Doug.



From: ptp-dev-bounces@xxxxxxxxxxx [ptp-dev-bounces@xxxxxxxxxxx] on behalf of Roland Schulz [roland@xxxxxxx]
Sent: Tuesday, June 24, 2014 4:41 PM
To: Parallel Tools Platform general developers
Subject: Re: [ptp-dev] Remote Project

I recommend using a synchronized project and tuning the local indexer. See e.g. http://stackoverflow.com/questions/9565125/whats-the-recommended-eclipse-cdt-configuration-for-big-c-project-indexer-ta for tips on how to do that.
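The knobs I would start with are under Window > Preferences > C/C++ > Indexer (labels as of Kepler/Luna, from memory, so they may differ slightly in your version):

- uncheck "Index source files not included in the build"
- uncheck "Index unused headers"
- enable "Skip files larger than ..." and pick a fairly small limit
- if you can live without Call Hierarchy and full search, "Skip all references" shrinks the index a lot
- in the cache limits section, raise the index database cache (relative and absolute limits) and the header file cache, so the indexer can actually use the heap you give Eclipse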

Roland


On Tue, Jun 24, 2014 at 11:54 AM, Vishal Gupta <vishal.vit@xxxxxxxxx> wrote:
Ideally, yes, I want the indexing done on the server.
And I am hopeful that it will ease the load on the local machine.

Yes, I want everything indexed, but currently even my smaller project is not getting indexed.


On Tue, Jun 24, 2014 at 9:11 PM, Roland Schulz <roland@xxxxxxx> wrote:
Hi,

do you want to use the index? Do you need everything indexed?

Roland


On Tue, Jun 24, 2014 at 8:22 AM, Vishal Gupta <vishal.vit@xxxxxxxxx> wrote:
Hello Greg,

Thanks for the feedback.

Let me give you some background on why I thought of giving remote projects a try.

1. I have a very huge code base (in the GB range).
2. Using Eclipse to import the complete code base onto the local machine is crazy. Eclipse can't withstand such a huge code base, and the indexer makes things unusable.
3. So the solution was to check out the code base on the server, and I tried to use a remote project to access it. The benefit, I thought, was that the indexing (which is the most memory-intensive operation) would happen on the server.

Now, as you are suggesting to use a synchronized project, my biggest nightmare, indexing, will again happen on the local machine, which will again make Eclipse unusable for my code base. Another thing is the initial sync of the code base; my personal feeling is that the initial syncing between geographically distant locations will take ages.

I am very much open to contributing code if you can guide me towards a solution.

Thanks,
Vishal



On Tue, Jun 24, 2014 at 5:16 PM, Greg Watson <g.watson@xxxxxxxxxxxx> wrote:
Hi Vishal,

Many of the problems you describe are also apparent even when accessing a remote project in the same geographic location, or even the same local network, and tend to be inherent in the nature of remote projects. It may be possible to speed up directory listings as you suggest, but this will not do much to speed up the overall performance of remote projects.

As a result of these issues, we have transitioned to another type of project called synchronized projects. These projects maintain both a local and remote copy of the source files. Apart from the time it takes to do the initial synchronization, these projects tend to work much faster with Eclipse, since the indexer and other expensive operations only happen on the local copy of the files.

Synchronized projects are available in both the Kepler and the soon-to-be-released Luna versions of the Eclipse for Parallel Application Developers package.

Ongoing development of remote projects has pretty much ceased, so it is unlikely there will be any future updates unless you would like to contribute something.

Regards,
Greg

On Jun 24, 2014, at 6:59 AM, Vishal Gupta wrote:

> Hello ,
>
> I have just started using Remote Project.
> I am accessing my project in a geographically different location (the remote server is in the USA and my local machine is in India).
> My initial impression is that it's very slow to access even a small
> project with 1000 files (around a few MB). I am using the "remote tools" configuration
> to access my project on the remote machine.
> I tried looking into it, and my initial findings are:
>
> Project creation itself takes a long time, as the .project file and the .settings folder are created and updated on the remote machine. Would it be possible to create the .project on the local machine?
> I think something similar happens with RSE connections, where a hidden project is created on the local system for the connections and the configuration files are stored in it. There may be a better solution, for sure.
>
> For me the refresh of files and folders is also very slow; my initial feeling is that the "ls" operation over the SFTP connection is what takes the time. Any possibility to speed it up?
>
> The indexing for my C project is not working. The PDOM file is getting created on the remote system, but the indexing is not happening. Do I need to do anything extra?
> I checked the status of the remote indexing operation, and it only had info messages and no errors.
> I am still digging into the problem. Just wanted to check whether I have missed anything.
>
>

