Hi Christian,
Regarding fetching the scripts: when the user initially sets a remote location (say, GitHub), its domain is identified from the address and the scripts are fetched to the local machine. I have created a class for fetching those scripts from GitHub repositories. As of now it requires one API call for each script it fetches, but this can be reduced to a single API request that fetches all scripts from a given repository. The number of API requests matters here, as GitHub allows a maximum of 5000 content requests per hour for an authenticated user and 60 for an unauthenticated one. So if the corresponding address is identified, it makes sense to fetch the Git credentials stored in Eclipse, and if they are not available, to ask the user for them.
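To illustrate the current per-script behaviour, here is a rough sketch of what one such Contents API call looks like (the owner/repo/path/token values are only placeholders, and it assumes org.json is on the classpath):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import org.json.JSONObject;

public class GitHubScriptFetcher {

    // Fetch metadata for a single script via the Contents API (one request per file).
    // Passing a token raises the rate limit from 60 to 5000 requests per hour.
    public static String getRawUrl(String owner, String repo, String path, String token) throws Exception {
        URL url = new URL("https://api.github.com/repos/" + owner + "/" + repo + "/contents/" + path);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestProperty("Accept", "application/vnd.github.v3+json");
        if (token != null)
            connection.setRequestProperty("Authorization", "token " + token);

        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null)
                body.append(line);
        }
        // The response carries a direct download URL for the raw file.
        return new JSONObject(body.toString()).getString("download_url");
    }
}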
So, instead of extending the http parser as pointed out here, wouldn't it be better to use a separate class that parses the location using the web API? HttpParser would have to recursively look for anchors to collect download URLs for the raw files, whereas GitHub returns a JSON response for the recursive tree of a repository (containing the URLs of all its files and directories) in a single request. Can we consider this a legitimate approach?
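Roughly, the separate class I have in mind would do something like the following (only a sketch; owner/repo/branch/token are placeholders, and it again assumes org.json is available):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import org.json.JSONArray;
import org.json.JSONObject;

public class GitHubTreeParser {

    // List raw-file URLs for every file in the repository with a single Trees API request.
    public static List<String> listRawUrls(String owner, String repo, String branch, String token) throws Exception {
        URL url = new URL("https://api.github.com/repos/" + owner + "/" + repo
                + "/git/trees/" + branch + "?recursive=1");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestProperty("Accept", "application/vnd.github.v3+json");
        if (token != null)
            connection.setRequestProperty("Authorization", "token " + token);

        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null)
                body.append(line);
        }

        // Every blob entry in the recursive tree maps to a raw.githubusercontent.com URL.
        List<String> rawUrls = new ArrayList<>();
        JSONArray tree = new JSONObject(body.toString()).getJSONArray("tree");
        for (int i = 0; i < tree.length(); i++) {
            JSONObject entry = tree.getJSONObject(i);
            if ("blob".equals(entry.getString("type")))
                rawUrls.add("https://raw.githubusercontent.com/" + owner + "/" + repo
                        + "/" + branch + "/" + entry.getString("path"));
        }
        return rawUrls;
    }
}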
In the case of Gerrit and SVN, a different approach might be required.
Thanks again for helping
Utsav Oza