Re: [ease-dev] Introduction

Hi Christian,

If the user wants to edit a script fetched from a remote repository via the Script Explorer, how should those changes be synchronized, given that the script on the remote repository remains unchanged? I suppose we would then need to allow users to save and commit those changes locally, depending on the system (e.g. GitHub, Gerrit, or SVN), and push them to the remote repository. Can we consider this a valid way to synchronize the edited scripts with their respective remote locations?

And if the local and remote copies of a script are changed simultaneously from different sources, should execution prefer the remote changes over the local ones (thereby discarding the local changes)? A user may want to test their changes before pulling an updated copy from the remote. If that is the case, then a typical flow to try out local changes from the Script Explorer would be:
1. make some changes to the scripts locally
2. commit and push them to the remote location
3. execute them from the Script Explorer (which fetches fresh copies on execution if new commit hashes are found for those scripts)

This would apply even to small changes made from the Script Explorer. Please do correct me if I am wrong.

Basically, since we already save external script files ourselves (for filesystem and workspace locations), the same should apply to remote locations.

Thanks for helping
Utsav Oza

On Thu, Mar 9, 2017 at 9:37 PM, <Christian.Pontesegger@xxxxxxxxxxxx> wrote:

Hi,

 

There are two different use cases for accessing the remote location:

 

1)      When fetching scripts and parsing for updates to the script keywords
Here we should use the GitHub API as effectively as possible. What we need is a list of files (possibly recursively) and their commit hashes, so we can detect whether they have changed (see the first sketch below).
This process runs on Eclipse startup and then on a regular basis (currently once an hour). I am not sure about the hashes, but if you can get the directory structure with a single call, the 60 calls/h unauthenticated limit might not be a problem here.

2)      When executing scripts, we might access the HTTP link directly to fetch the raw content of the file, e.g. a link like this:
https://raw.githubusercontent.com/eclipse-ease-addons/engines/master/pom.xml
We do not know how often a user executes a script or how often it might get triggered automatically. If there is a rate limit on accessing such links, then we need to think about caching script content right away (see the second sketch below). Otherwise, caching could be postponed and treated as a performance improvement.
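
For illustration, here is a minimal sketch of use case 1), fetching the complete file tree in a single request via GitHub's Git Trees API. The endpoint and JSON field names come from the public GitHub REST API; the class name is hypothetical and Gson is assumed for JSON parsing. Note that the tree API reports blob SHAs rather than commit hashes; since a blob SHA changes whenever the file content changes, it serves the same change-detection purpose:

    import com.google.gson.JsonElement;
    import com.google.gson.JsonObject;
    import com.google.gson.JsonParser;

    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical helper: lists every file of a repository together with its
    // blob SHA using a single call to the Git Trees API.
    public class GithubTreeFetcher {

        public static void listFiles(String owner, String repo, String branch) throws Exception {
            // ?recursive=1 returns the whole tree in one response
            // (subject to GitHub's size limits, reported via "truncated")
            URL url = new URL("https://api.github.com/repos/" + owner + "/" + repo
                    + "/git/trees/" + branch + "?recursive=1");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();

            JsonObject root = JsonParser
                    .parseReader(new InputStreamReader(connection.getInputStream()))
                    .getAsJsonObject();
            for (JsonElement element : root.getAsJsonArray("tree")) {
                JsonObject entry = element.getAsJsonObject();
                // "blob" entries are files, "tree" entries are directories
                if ("blob".equals(entry.get("type").getAsString()))
                    System.out.println(entry.get("sha").getAsString()
                            + "  " + entry.get("path").getAsString());
            }
        }
    }

One such call per repository and polling interval keeps the hourly update check well below even the unauthenticated quota.

And a minimal sketch of caching for use case 2), keyed by that blob SHA so raw.githubusercontent.com is only hit when a file actually changed (class and method names are again hypothetical):

    import java.io.InputStream;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical cache: script content is re-fetched only when the blob SHA
    // reported by the tree lookup differs from the one it was cached under.
    public class ScriptContentCache {

        private final Map<String, String> fContentByHash = new HashMap<>();

        public String getContent(String rawUrl, String blobSha) throws Exception {
            String cached = fContentByHash.get(blobSha);
            if (cached != null)
                return cached; // unchanged since the last fetch: no network access

            try (InputStream in = new URL(rawUrl).openStream()) {
                String content = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                fContentByHash.put(blobSha, content);
                return content;
            }
        }
    }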

 

So the HttpParser might not be needed at all. We then have to make sure that the current HttpParser does not try to parse such locations and that the parsing is relayed to the GithubParser.
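
A possible shape for such a check, purely as a sketch (this helper and the way parsers get selected are invented for illustration, not the actual EASE interfaces):

    import java.net.MalformedURLException;
    import java.net.URL;

    // Hypothetical guard: lets the GithubParser claim GitHub locations so the
    // generic HttpParser never tries to crawl them for anchors.
    public static boolean isGithubLocation(String location) {
        try {
            String host = new URL(location).getHost();
            return "github.com".equals(host) || "raw.githubusercontent.com".equals(host);
        } catch (MalformedURLException e) {
            return false;
        }
    }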

 

Christian

 

From: ease-dev-bounces@xxxxxxxxxxx [mailto:ease-dev-bounces@eclipse.org] On Behalf Of Utsav Oza
Sent: Thursday, March 09, 2017 3:18 PM
To: ease developer discussions
Subject: Re: [ease-dev] Introduction

 

Hi Christian,

 

Regarding fetching the scripts: when the user initially sets a remote location (say GitHub), its domain is identified from the address and the scripts are fetched to the local machine. I have created a class for fetching those scripts from GitHub repositories. As of now it requires one API call for each script it fetches, but this can be reduced to a single API request that fetches all scripts of a given repository. The number of API requests matters here, as GitHub allows a maximum of 5000 content requests per hour for an authenticated user and 60 for an unauthenticated one. So if the corresponding address is identified, it makes sense to fetch the Git credentials stored in Eclipse, and if none are found, to ask the user for credentials.
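
For illustration, a minimal sketch of attaching such credentials to a request; the Authorization header format is from the GitHub API documentation, while the token lookup from Eclipse's secure storage is left out:

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical helper: an authenticated request counts against the 5000/h
    // quota instead of the 60/h anonymous one.
    public static HttpURLConnection openGithubConnection(String address, String token) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(address).openConnection();
        if (token != null)
            connection.setRequestProperty("Authorization", "token " + token);
        return connection;
    }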

So, instead of extending the HttpParser as pointed out here, wouldn't it be better to use a separate class that parses the location via the web API? The HttpParser would recursively look for anchors to find download URLs for raw files, whereas GitHub provides a JSON response with the recursive tree (containing URLs for all the files and directories in the repository) for a particular repository in a single request. Can we consider this a legitimate approach?
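
For reference, the single request and a trimmed example of its JSON response (endpoint and field names are from the public GitHub REST API; the repository is the one from the raw link above):

    GET https://api.github.com/repos/eclipse-ease-addons/engines/git/trees/master?recursive=1

    {
      "sha": "...",
      "tree": [
        { "path": "pom.xml", "type": "blob", "sha": "...", "url": "..." },
        { "path": "plugins", "type": "tree", "sha": "...", "url": "..." }
      ],
      "truncated": false
    }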

In the case of Gerrit and SVN, a different approach might be required.

 

Thanks again for helping

Utsav Oza

