[p2-dev] FYI Fw: [mancoosi-all-members] MISC 2010: the Mancoosi Internal Solver Competition, reloaded!

I want us to participate using parts of p2 so we can savour the Champagne at EclipseCon :)


----- Forwarded by Pascal Rapicault/Ottawa/IBM on 04/12/2009 09:36 AM -----


From: Roberto Di Cosmo <roberto@xxxxxxxxxxx>
To: mancoosi-all-members <mancoosi-all-members@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Date: 04/12/2009 09:24 AM
Subject: [mancoosi-all-members] MISC 2010: the Mancoosi Internal Solver Competition, reloaded!




Dear all,

After the first test in Lisbon, which helped identify a set of issues
in running a competition of solvers on CUDF instances, we now have
all the pieces in place to successfully organise a realistic internal
solver competition.

In what follows, you will find all relevant information on how to
participate in the competition.

This time, there will be a real winner, and the winner will get a
serious bottle of French Champagne, delivered officially at the Nice
meeting on January 7th.

If you have any doubts or comments on the rules, please let us know as
soon as possible.

And now, let the game begin!

--Ralf and Roberto


IMPORTANT DATES
===============

We will need a few days to run all of your solvers on a significant
data set, and rank them according to the rules detailed below, so here
are the important dates:


* Friday, December 11, 2009, at noon: deadline for declaring the intent
 to participate, by email to:

   mancoosi-competition@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

* Debugging phase: from 7/12 to 18/12 at noon you will be able to
 interact with Pietro and Zack to make sure your solver instance
 executes properly on the Mancoosi servers.

* Friday, December 18, 2009, at noon: deadline for submitting a first
 version of your solver which executes properly on the Mancoosi
 servers; after this date, no support is to be expected in debugging
 your solver on the Mancoosi servers. Submission is by email to the
 above address.

* Improvement phase: until january 3rd you may submit improved versions
 of your solver(s), to the above address.

* 3/1 midnight: latest date for submitting the final solver instance for
 the competition.

* 4/1 to 6/1: execution of the solver competition on the Mancoosi
 servers: the final solver instances will be run on the data sets.

* 7/1, in Nice: Announcement of the result (and delivery of the bottle :-))


THE RULES OF THE GAME
=====================

Optimization criteria:
----------------------

Two fixed optimization criteria will be used in this version of the
competition: they are both lexicographic combinations of four simple
integer-valued utility functions of a solution, which we describe
below.

Let's call U the universe of a CUDF document, which contains the
description of all available packages; the field Installed: is set to
1 for packages installed on the user machine, and 0 otherwise.

Let's call S a solution to the user request computed by your solver.

We write V(X,name) for the set of versions in which name (the name of
a package) is installed in X, where X may be U or S. That set may be
empty (name is not installed), contain one element (name is installed
in exactly that version), or even contain multiple elements in case a
package is installed in multiple versions.

We write

- #removed(U,S) is the number of packages REMOVED in the proposed
 solution S w.r.t. the original installation U:

 #removed(U,S) = #{name | V(U,name) nonempty and V(S,name) empty}

- #new(U,S) is the number of NEW packages in the proposed solution S
 w.r.t. the original installation U:

 #new(U,S) = #{name | V(U,name) empty and V(S,name) nonempty}

- #changed(U,S) is the number of packages with a MODIFIED (set of)
 version(s) in the proposed solution S w.r.t. the original
 installation U:

 #changed(U,S) = #{name | V(U,name) different from V(S,name)}

- #uptodate(U,S) is the number of packages with the LATEST available
 version in the proposed solution S:

 #uptodate(U,S) = #{name | V(S,name) contains the latest available
                    version of name}
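These four counts follow directly from the V(X,name) sets. Below is an illustrative Python sketch (not part of the competition infrastructure): an installation is represented as a dict mapping a package name to its set of installed versions, and the latest available versions are taken as a separate, assumed `latest` map.

```python
# Illustrative sketch of the four utility functions. An installation
# (U or S) is a dict: package name -> set of installed versions,
# i.e. V(X, name). "latest" maps each name to the newest available
# version (an assumed input here; in CUDF it comes from the universe).

def removed(U, S):
    # names installed in U but not installed at all in S
    return sum(1 for name in U if U[name] and not S.get(name))

def new(U, S):
    # names installed in S that were not installed in U
    return sum(1 for name in S if S[name] and not U.get(name))

def changed(U, S):
    # names whose set of installed versions differs between U and S
    names = set(U) | set(S)
    return sum(1 for name in names
               if U.get(name, set()) != S.get(name, set()))

def uptodate(S, latest):
    # names installed in S at the latest available version
    return sum(1 for name in S if latest.get(name) in S[name])

U = {"car": {1}, "engine": {1}}
S = {"car": {2}, "wheel": {1}}
print(removed(U, S), new(U, S), changed(U, S))  # 1 1 3
```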


The two optimization criteria are

PARANOID: we want to answer the user request, minimizing the number
          of packages removed in the solution, and also the number
          of packages changed by the solution; the optimization
          criterion is

               lex( min #removed, min #changed)

          Hence, two solutions S1 and S2 will be compared as follows:

          i) compute ri = #removed(U,Si), ci = #changed(U,Si)

          ii) S1 is better than S2 iff r1 < r2 or (r1=r2 and c1<c2)
         

TRENDY:   we want to answer the user request, minimizing the number
          of packages removed in the solution, maximizing the number
          of packages at their most recent version in the solution, and
          minimizing the number of extra packages installed;
          the optimization criterion is
         
               lex( min #removed, max #uptodate, min #new)

          Hence, two solutions S1 and S2 will be compared as follows:

          i) compute ri = #removed(U,Si), ui = #uptodate(U,Si), ni = #new(U,Si)

          ii) S1 is better than S2 iff
              r1 < r2 or (r1=r2 and (u1>u2 or (u1=u2 and n1<n2)))
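Both comparisons are plain lexicographic orderings, so in a language with lexicographic tuple comparison they can be expressed by turning each solution into a key tuple and negating the component that is maximized. A small Python sketch (the counts removed/changed/uptodate/new are assumed to have been computed as defined above):

```python
# Lexicographic comparison of solutions via key tuples: Python
# compares tuples lexicographically, and #uptodate is negated
# because it is maximized. Smaller key = better solution.

def paranoid_key(removed, changed):
    return (removed, changed)

def trendy_key(removed, uptodate, new):
    return (removed, -uptodate, new)

# fewer removals beats fewer changes:
print(paranoid_key(1, 5) < paranoid_key(2, 0))    # True
# equal removals: more up-to-date packages wins:
print(trendy_key(2, 9, 1) < trendy_key(2, 7, 0))  # True
```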


Determining the winner on the competition data sets
---------------------------------------------------

All participating solvers will be executed on the same set of input
problems. The exact number of input problems remains to be determined.
The solvers will be ranked as follows: let m be the number of
participating solvers.

For each input problem we give points to each solver, according to
whether the participant yields

(1) a claimed solution that really is a solution
(2) FAIL (that is, the solver declares that it has not found a solution)
(3) abort (timeout, segmentation fault, ...)
(4) a claimed solution, which in reality is *not* a solution to the problem

The solvers in case (1) are ordered according to the optimization
criterion (best solution first), and a participant in case (1) gets
the number of points that corresponds to its position in that
list. For example, if solvers A, B, C, D have found a valid solution,
if the solution of A is the best, the ones found by B and C are
equally optimal, and the one found by D is the least optimal, then A
gets 1, B and C both get 2, and D gets 4.

A participant in case (2) gets 2*m points
A participant in case (3) gets 3*m points
A participant in case (4) gets 4*m points


The total score of a solver is the sum of the number of points
obtained. The solver with the minimal score wins.

Note that it is possible that an input problem indeed doesn't have
a solution. In that case, (1) above is not possible, so the solvers
that correctly say FAIL are ranked best.
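The scoring scheme above can be sketched as follows; this is an illustration of the rules as stated, not the official scoring code. Outcomes are encoded as ("solution", key) for a valid solution, with key the criterion tuple (lower is better), or as one of the assumed strings "fail", "abort", "bad_solution":

```python
# Per-problem scoring sketch for m participating solvers.
# Valid solutions are ranked with ties: a solver's points are
# 1 + (number of solvers with a strictly better key).

def score_problem(outcomes, m):
    points = {}
    keys = {s: o[1] for s, o in outcomes.items() if isinstance(o, tuple)}
    for solver, outcome in outcomes.items():
        if isinstance(outcome, tuple):                    # case (1)
            points[solver] = 1 + sum(1 for k in keys.values()
                                     if k < keys[solver])
        elif outcome == "fail":                           # case (2)
            points[solver] = 2 * m
        elif outcome == "abort":                          # case (3)
            points[solver] = 3 * m
        else:                                             # case (4)
            points[solver] = 4 * m
    return points

# The worked example from the text: A best, B and C tied, D worst.
outcomes = {"A": ("solution", (0, 1)), "B": ("solution", (1, 1)),
            "C": ("solution", (1, 1)), "D": ("solution", (2, 0))}
print(score_problem(outcomes, 4))  # {'A': 1, 'B': 2, 'C': 2, 'D': 4}
```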


Training sets
-------------

There are two training data sets available for the mini-competition
(these are *not* the problem sets that we will be using in the real
competition, but you need them to test that your solver performs
correctly).

They are available at

   https://mancoosi.org/~abate/cudfproblems/

and are randomly generated problems simulating possible upgrade
requests of a real Debian machine, using different repositories:

  rand.biglist : upgrades using packages from etch + testing + unstable
  rand.smallist : upgrades using packages from testing + unstable



TECHNICAL INFORMATION
=====================

All solvers must be able to accept as input a CUDF 2.0 document and
output a solution in the form of another CUDF 2.0 document containing
a new universe of packages satisfying the request.
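For a feel of the input format, here is a minimal Python sketch that extracts the installed packages from a CUDF document. It only recognises the package:, version: and installed: fields (assuming the boolean is written `installed: true`, as in the CUDF primer), so it illustrates the stanza structure only and is no substitute for libCUDF:

```python
# Toy extraction of the installed set from a CUDF document.
# CUDF is stanza-based (RFC-822-like paragraphs separated by blank
# lines); this sketch ignores the preamble, the request and every
# field other than package/version/installed.

def installed_versions(cudf_text):
    installed = {}
    for stanza in cudf_text.split("\n\n"):
        fields = {}
        for line in stanza.splitlines():
            if ":" in line and not line.startswith(" "):
                key, _, value = line.partition(":")
                fields[key.strip().lower()] = value.strip()
        if "package" in fields and fields.get("installed") == "true":
            installed.setdefault(fields["package"], set()).add(
                int(fields["version"]))
    return installed

doc = """package: car
version: 1
installed: true

package: wheel
version: 2
depends: glass >= 1
"""
print(installed_versions(doc))  # {'car': {1}}
```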

The resources you need to parse CUDF 2.0 files are available on the
web page

 http://www.mancoosi.org/software/
The first usable version is libCUDF 0.5.92, downloadable from
http://gforge.info.ucl.ac.be/frs/download.php/154/cudf_0.5.92.tar.gz
but you can also get the latest development version from the forge:
https://gforge.info.ucl.ac.be/svn/mancoosi/trunk/updb/dose3/libcudf

The library is written natively in OCaml, and C bindings are
available; see the INSTALL and README files.

For Debian users, packages are already available (see the web page).

You can use the problems in the training set to test your solvers.

A primer for CUDF is available at
http://www.mancoosi.org/cudf/primer/

If you have any doubts, the (long) specification of CUDF is here:
http://www.mancoosi.org/reports/tr3.pdf

As a last resort, you can get in touch with Pietro or Zack to get help.


Submitting solvers
------------------

The instructions on how to submit your solvers to the competition are
available here

https://gforge.info.ucl.ac.be/svn/mancoosi/trunk/competition/solver-api.txt

