List of projects

Computational Foundations of Social Choice (CFSC)

(DFG, ISF, NWO, TÜBITAK) 

This collaborative research project addressed key issues in computational social choice, an interdisciplinary field of study at the interface of social choice theory and computer science. Computational social choice is concerned with the application of computational techniques to the study of social choice mechanisms, such as voting rules and fair division protocols, and with the integration of social choice paradigms into computing. The project combined the expertise of some of the most active researchers in the field, who have worked on different aspects of computational social choice in the past, and who have come to this area from very different backgrounds: theoretical computer science, artificial intelligence, logic, economics and political science.

The Amsterdam group studied applications of logic in a number of areas in social choice theory. One strand of work concerns the first computational study of the framework of judgment aggregation, which deals with the aggregation of propositions expressed in a logical language. We have established the computational complexity of problems such as computing the collective judgment, manipulating a judgment aggregation procedure, and checking whether consistency of the outcome can be guaranteed for a given set of propositions. A second line of work concerns the modelling of preferences using logic-based languages. In cooperation with the group at LAMSADE, we have investigated the properties of the framework of weighted goals for preference representation in great detail. We have also extended this framework using the tools of linear logic to allow for a better modelling of domains exhibiting multiplicity of items. This has applications in fair division and combinatorial auctions.
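
To illustrate the kind of problem studied here, the following minimal Python sketch (our own toy example, not project code) shows the classic 'discursive dilemma': proposition-wise majority voting over the agenda {p, q, p∧q} can turn three individually consistent judgment sets into an inconsistent collective judgment.

    # Proposition-wise majority on the agenda {p, q, p&q}. Each individual
    # judgment set is consistent, yet the majority accepts p and q while
    # rejecting their conjunction.

    judges = [
        {"p": True,  "q": True,  "p&q": True},   # judge 1
        {"p": True,  "q": False, "p&q": False},  # judge 2
        {"p": False, "q": True,  "p&q": False},  # judge 3
    ]

    def majority(profile, issue):
        """Accept an issue iff a strict majority of individuals accepts it."""
        return sum(j[issue] for j in profile) > len(profile) / 2

    collective = {issue: majority(judges, issue) for issue in ["p", "q", "p&q"]}
    print(collective)  # {'p': True, 'q': True, 'p&q': False} -- inconsistent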

The group in Jerusalem investigated the problem of coalitional manipulation in elections; we put forward efficient algorithms for the problem in Borda, maximin and plurality with runoff voting, and analysed their windows of error. Specifically, given an instance on which an algorithm fails, we bounded the additional power the manipulators need in order to succeed. In other work, we investigated the possibility of stabilising a coalitional game by using external payments (the minimal such payment is called ‘the cost of stability’). We proved general bounds on the cost of stability in several classes of games and provided a detailed algorithmic study of the cost of stability in weighted voting games. We also extended our model and results to games with coalition structures.
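
The flavour of such algorithms can be conveyed by the standard greedy heuristic for coalitional manipulation under Borda (a hedged sketch of the well-known 'reverse' strategy, not the project's exact algorithms): each manipulator ranks the preferred candidate first and the remaining candidates from currently weakest to currently strongest, so that the strongest rivals receive the fewest points.

    # Greedy coalitional manipulation under the Borda rule.

    def borda_scores(votes, candidates):
        m = len(candidates)
        scores = {c: 0 for c in candidates}
        for ranking in votes:
            for pos, c in enumerate(ranking):
                scores[c] += m - 1 - pos
        return scores

    def greedy_manipulation(honest_votes, candidates, preferred, k):
        """Add k manipulator ballots, one at a time, trying to make
        `preferred` win; returns (success, final scores)."""
        votes = list(honest_votes)
        for _ in range(k):
            scores = borda_scores(votes, candidates)
            rivals = sorted((c for c in candidates if c != preferred),
                            key=lambda c: scores[c])   # weakest rival first
            votes.append([preferred] + rivals)
        final = borda_scores(votes, candidates)
        return max(final, key=final.get) == preferred, final

    honest = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
    print(greedy_manipulation(honest, ["a", "b", "c"], "c", 2))
    # (True, {'a': 5, 'b': 4, 'c': 6}) -- two manipulators suffice here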

The Düsseldorf group studied the computational complexity of control, manipulation and bribery in a variety of models. Among the main achievements is the identification of natural voting rules (namely, Bucklin, fallback voting and sincere-strategy preference-based approval voting) whose winners can be identified in polynomial time, but which have the broadest computational control resistance currently known to hold. In addition, the complexity issues regarding control, bribery and microbribery have been completely settled for the entire family of Llull/Copeland voting rules. For single-peaked electorates under a variety of voting rules, we showed that complexity shields for manipulation and control may evaporate, stay in place or even be erected, depending on the given scenario. Other topics studied in Düsseldorf include cake-cutting algorithms, probabilistic lobbying and the complexity of some variants of the possible winner problem.
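
As a small illustration of the single-peakedness restriction mentioned above (a standard textbook check, not the project's code), the following Python sketch tests whether a profile of rankings is single-peaked with respect to a given left-right axis: read top-down, each ranking must always extend a contiguous interval around its peak.

    # Check single-peakedness of a preference profile w.r.t. a fixed axis.

    def single_peaked_on_axis(profile, axis):
        pos = {c: i for i, c in enumerate(axis)}
        for ranking in profile:
            left = right = pos[ranking[0]]        # the voter's peak
            for c in ranking[1:]:
                if pos[c] == left - 1:            # interval grows leftwards
                    left -= 1
                elif pos[c] == right + 1:         # ... or rightwards
                    right += 1
                else:
                    return False
        return True

    axis = ["far_left", "left", "centre", "right"]
    print(single_peaked_on_axis([["left", "centre", "far_left", "right"]], axis))  # True
    print(single_peaked_on_axis([["left", "right", "centre", "far_left"]], axis))  # False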

The group in Munich investigated the axiomatics and computational aspects of concepts and voting rules that are based on pairwise majority comparisons. We proposed a systematic methodology for defining such concepts using the notions of qualified subsets, von Neumann-Morgenstern stable sets and Schwartz retentive sets, and studied their relationship to rationalisability and strategyproofness. On the computational side, we obtained preprocessing techniques via modular decomposition, intractability results, efficient algorithms and heuristics in this context. Particularly noteworthy are an NP-hardness proof for computing the tournament equilibrium set as well as some progress towards solving an important conjecture associated with this concept.
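
For concreteness, here is a minimal Python sketch (our illustration, using a toy dominance relation) of two of the majoritarian concepts involved: Copeland scores, and the top cycle computed by brute force as the smallest dominant set.

    # Copeland scores and the top cycle of a small tournament.
    from itertools import combinations

    # beats[x] = alternatives that x beats in pairwise majority contests;
    # a, b, c form a cycle, and all three beat d.
    beats = {"a": {"b", "d"}, "b": {"c", "d"}, "c": {"a", "d"}, "d": set()}

    copeland = {x: len(beats[x]) for x in beats}
    print(copeland)  # {'a': 2, 'b': 2, 'c': 2, 'd': 0}

    def is_dominant(subset, alts):
        """Every member of `subset` beats every alternative outside it."""
        return all(y in beats[x] for x in subset for y in alts - subset)

    def top_cycle(alts):
        for size in range(1, len(alts) + 1):
            for subset in combinations(sorted(alts), size):
                if is_dominant(set(subset), alts):
                    return set(subset)

    print(top_cycle(set(beats)))  # {'a', 'b', 'c'}: the cycle excludes d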

During the course of this project, Professors Jean-François Laslier and Remzi Sanver completed their editing of the Handbook on Approval Voting (published by Springer), with several chapters written by one or more project members. A special workshop, attended by researchers inside and outside the project, was organised in Palaiseau (France) on the occasion of the book’s release.

Members of the project consortium also organised a Dagstuhl seminar on ‘Computational Foundations of Social Choice’ (March 2010), two international workshops on Computational Social Choice (September 2008 and September 2010) and an IJCAI workshop on Social Choice and Artificial Intelligence (July 2011), and edited three special issues of international journals on computational social choice.

Further information: CFSC website

Project Leader: Felix Brandt, Technische Universität München (TUM), Germany

Principal Investigators:

  • Ulle Endriss, Universiteit van Amsterdam, the Netherlands
  • Jeffrey Rosenschein, The Hebrew University of Jerusalem, Israel
  • Jörg Rothe, Heinrich-Heine-Universität Düsseldorf, Germany
  • Remzi Sanver, Bilgi University, Istanbul, Turkey

Associated Partners:

  • Vincent Conitzer, Duke University, Durham, United States
  • Edith Elkind, University of Southampton, United Kingdom
  • Edith Hemaspaandra, Rochester Institute of Technology, United States
  • Lane Hemaspaandra, University of Rochester, United States
  • Jérôme Lang, Université Paul Sabatier, CNRS, Toulouse, France
  • Jean-François Laslier, École Polytechnique, CNRS, Paris, France
  • Nicolas Maudet, Université Paris 9 Dauphine, Paris, France

Dialogical Foundations of Semantics (DiFoS)

(DFG, FCT, NWO)


One of the distinguishing features of the DiFoS project is the combination of technical, philosophical and historical work. The work focused on two main research lines that have been interactively developed in parallel: on the one hand, the relation of traditional – especially medieval – dialectical practices and modern dialogical conceptions of semantics; on the other, the comparison of the modern dialogical approach with other related ones, particularly on the proof-theoretic side.

A main subject of the Amsterdam node was the relationship between the medieval tradition of obligationes and modern dialogical logic. Here Dr Uckelman provided a framework for the comparison of these traditions in order to answer the question of how they are related. The answer is that obligationes and modern dialogical logic are distinct and unrelated formal approaches to logic, despite the game-theoretic flavour that they both share.

The comparison between the Western historical tradition of dialogue-based logical analysis and modern dialogical logic naturally led to a question that was not originally foreseen in the project proposal, but turned out to be extremely fruitful: to what extent can the techniques used for the comparison of obligationes and modern dialogical logic be used to analyse historical dialogical traditions of other cultures? The project ‘Dynamic and Dialogical Approaches to Historical Logic (DDAHL)’ was developed in the Amsterdam node of the CRP and provided a common thread in the research activity, applying the techniques in particular to the Indian dialogical tradition. The cross-CRP event ‘Dialogues and Games’ in Lille (February 2010) was one of the first events in a series of meetings dealing with these issues. The formal reconstruction techniques used in DiFoS can also be used in fields other than historical traditions of logic: Professor Löwe has successfully applied them in the formalisation of narratives.

On the proof-theoretic side, which was essentially the topic of the Lisbon and Tübingen nodes, the main objective of the CRP was to clarify the relationship between proof-theoretic and dialogical/game-theoretic approaches to semantics.

a)  The structural relations between dialogues and sequent calculus have been studied in terms of a format which presents dialogues and winning strategies in sequent style. This format helps to adapt methods from proof theory and also to import ideas from dialogues into proof theory. The proof-theoretic treatment of definitional reflection, a universal approach to inferential definitions of logical and non-logical constants, has been carried over to the dialogical framework. A completeness result has been established for the intuitionistic sequent calculus with complex initial sequents in relation to the corresponding dialogical system for intuitionistic logic, and the dialogical system has been extended by a certain argumentation form for definitional reasoning. The special case of semantic paradoxes, which feature prominently in philosophical theory, was analysed as a test case for the dialogical approach in comparison with the proof-theoretic treatment. The dialogical approach to semantics is grounded in the idea of a basic symmetry represented by the opposite roles played by the two agents engaged in a dialogue. The possibility of displaying such a duality in the proof-theoretic approach as well has been thoroughly investigated. Although it is possible to ‘dualise’ the basic proof-theoretic notions by introducing refutations alongside proofs, it is doubtful that a full analogy with the dialogical setting can be achieved.

b) In relation to this point, the conception of implications as rules, which lies at the heart of computational approaches to implication, was investigated in the dialogical setting. An alternative framework was developed, which diverges from the standard symmetric treatment of implication.  A precise comparison of proof-theoretic and dialogical semantics of implications as rules showed that the asymmetry in deductive reasoning, which is the distinguishing feature of the proof-theoretic approach, cannot be given up completely when switching to the dialogue-theoretic approach, which is inherently more symmetric.

c) The comparison of different semantic frameworks has been extended to include the more traditional truth-theoretic approach. The idea of two layers in the structure of a semantic theory (tentatively referred to as ‘categorical’ and ‘hypothetical’, or ‘functional’) naturally applies to the dialogical setting. Here, an analogous distinction holds between the level of play (i.e., of a single dialogue) and the level of strategy (i.e., of all possible dialogues with respect to a certain proposition). The choice of which of the two layers is to be assigned priority seems to be critical for the architecture of a semantic theory. The work carried out provides a general pattern of analysis that could be fruitfully applied to other frameworks as well.

These three intertwined issues were presented and discussed at the cross-CRP workshop on ‘Proof and Dialogues’ in February 2011 in Tübingen, which was the first conference ever explicitly devoted to the relation between dialogical and proof-theoretic semantics.  Compared to what was envisaged in the original application, the results of the Tübingen and Lisbon groups exhibit a more fine-grained and sophisticated picture of the two approaches. Although certain conceptual distinctions are indeed available in both frameworks, the dialogical framework can more successfully cope with phenomena involving some kind of symmetry, such as negation, whereas the proof-theoretic approach reveals all its power in describing asymmetric features of reasoning, such as implication-like operators.


Further information: DiFoS website

Project Leader: Peter Schroeder-Heister, Universität Tübingen, Germany

Principal Investigators:

  • Reinhard Kahle, Universidade Nova de Lisboa, Caparica, Portugal
  • Benedikt Löwe, Universiteit van Amsterdam, the Netherlands

Games for Analysis and Synthesis of Interactive Computational Systems (GASICS)

(CNRS, DASTI, DFG, FNRS)

To illustrate the scientific contributions obtained in the GASICS project, we give several examples of concrete progress in the four main topics structuring the project:

•    Multi-player non-zero-sum games: The notion of Nash equilibrium has been studied for multi-player non-zero-sum games played on graphs, both for Boolean objectives and for quantitative objectives. The existence of Nash equilibria has been proven in both cases and computational complexity questions have been settled. Variants like secure equilibria or subgame perfect equilibria have also been studied.

•    Quantitative games: Quantitative games played on weighted graphs are important mathematical models for the synthesis of optimal or Pareto-optimal controllers. Within the project we have improved existing algorithms for solving mean-payoff games, and we have extended mean-payoff and energy games from the one-dimensional to the multi-dimensional case (a small value-iteration sketch for energy games follows after this list).

•    Games with imperfect information: When modelling reactive systems made of several components, we need to take into account explicitly that each component has only a partial view of the system in which it operates. This partial view leads to imperfect information: in a multi-process system, for example, a process does not have access to the private variables of other processes. Games of imperfect information are usually computationally more expensive to solve. We have developed a new symbolic data structure to efficiently handle the state spaces underlying the analysis of games with imperfect information. This new data structure, based on the notion of ‘antichain’, has several other applications in the field of automata theory (a minimal illustration of the antichain idea also follows after this list).

•    Implementation and heuristics: To evaluate the possibility of tool-based automatic synthesis of reactive systems using game theory, we have started the implementation of synthesis tools. In particular, we have implemented a tool for solving timed games, a variant of the Uppaal tool set developed in Aalborg (PI2). Uppaal Tiga allows for the modelling of a timed game as a product of timed automata and can decide whether the controller has a winning strategy in the timed game. If so, the tool allows the user to extract a winning strategy that can then be turned into a program.
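
The following minimal Python sketch illustrates the second item above (our own toy reconstruction of the standard value-iteration idea for one-dimensional energy games, which are closely related to mean-payoff games): Player 0 wins from a vertex iff some finite initial credit keeps the running sum of edge weights nonnegative forever.

    # Value iteration for an energy game on a weighted graph.
    # Every vertex is assumed to have at least one outgoing edge.
    TOP = float("inf")

    def solve_energy_game(vertices, edges, owner, max_credit):
        """edges: (u, v, weight); owner[v] in {0, 1}. Returns the minimal
        sufficient initial credit per vertex (inf = Player 1 wins)."""
        succ = {v: [] for v in vertices}
        for u, v, w in edges:
            succ[u].append((v, w))
        credit = {v: 0 for v in vertices}
        changed = True
        while changed:
            changed = False
            for v in vertices:
                reqs = [max(0, credit[w] - c) for w, c in succ[v]]
                new = min(reqs) if owner[v] == 0 else max(reqs)
                if new > max_credit:
                    new = TOP                 # no finite credit suffices
                if new > credit[v]:
                    credit[v] = new
                    changed = True
        return credit

    # Toy game: from p, Player 0 may loop at cost -1 or move to q for free;
    # from q, Player 1 can only return to p, gaining +1.
    vertices = ["p", "q"]
    edges = [("p", "p", -1), ("p", "q", 0), ("q", "p", 1)]
    print(solve_energy_game(vertices, edges, {"p": 0, "q": 1}, max_credit=4))
    # {'p': 0, 'q': 0}: Player 0 survives with zero initial credit.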
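
The antichain idea from the third item above can likewise be conveyed in a few lines (a minimal sketch, assuming the relevant sets of states are closed under subsets, so that only the subset-maximal sets need to be stored):

    # An antichain stores only the maximal sets of a subset-closed family.

    def insert(antichain, s):
        """Add set s, keeping only subset-maximal elements."""
        if any(s <= t for t in antichain):
            return antichain                      # s is dominated: no change
        return [t for t in antichain if not t <= s] + [s]

    ac = []
    for s in [{1, 2}, {2, 3}, {1, 2, 3}, {2}]:
        ac = insert(ac, frozenset(s))
    print(sorted(map(sorted, ac)))  # [[1, 2, 3]] -- one maximal set survives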

Further information: GASICS website

Project Leader: Jean-François Raskin, Université Libre de Bruxelles, Belgium

Principal Investigators:

  • Wolfgang Thomas, RWTH Aachen University, Germany
  • Kim Larsen, Aalborg University, Denmark

Associated Partners:

  • Marcin Jurdzinski, University of Warwick, United Kingdom
  • Jean-Eric Pin, Université Paris Diderot, CNRS, Paris, France
  • Nicolas Markey, LSV, École Normale Supérieure, Cachan, France

The Logic of Causal and Probabilistic Reasoning in Uncertain Environments (LcpR)

(DFG, FWF, GACR)

The team of Professor Gerhard Schurz developed a computer simulation comparing different logico-probabilistic reasoning systems. Four prominent systems were investigated: the systems O, P, Z and QC (quasi-classical). These systems differ in the number of inferences they license (O ⊂ P ⊂ Z ⊂ QC). Systems that license more inferences enjoy the possible reward of deriving more true informative conclusions, but with this possible reward comes the risk of drawing more false conclusions. Each of the four systems was extended by appeal to theorems that allow one to compute almost-tight lower probability bounds for the conclusion of an inference, given lower probability bounds for its premises. The results suggest that system Z offers the best balance.
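
System Z ranks default rules by Pearl's tolerance-based partition. The following brute-force Python sketch (our own toy reconstruction for the classic 'penguin' knowledge base, not the project's simulation code) computes that partition by repeatedly peeling off the rules tolerated by the remaining set.

    # Z-partition of default rules (body_literals, head_literal);
    # '~x' denotes the negation of variable x.
    from itertools import product

    def holds(world, lit):
        return not world[lit[1:]] if lit.startswith("~") else world[lit]

    def satisfies(world, rule):          # material reading of body => head
        body, head = rule
        return not all(holds(world, l) for l in body) or holds(world, head)

    def verifies(world, rule):           # body and head both true
        body, head = rule
        return all(holds(world, l) for l in body) and holds(world, head)

    def z_partition(rules, variables):
        worlds = [dict(zip(variables, bits))
                  for bits in product([False, True], repeat=len(variables))]
        remaining, layers = list(rules), []
        while remaining:
            tolerated = [r for r in remaining
                         if any(verifies(w, r) and
                                all(satisfies(w, s) for s in remaining)
                                for w in worlds)]
            if not tolerated:
                raise ValueError("knowledge base is not consistent")
            layers.append(tolerated)
            remaining = [r for r in remaining if r not in tolerated]
        return layers

    # Birds fly; penguins are birds; penguins don't fly.
    rules = [(["b"], "f"), (["p"], "b"), (["p"], "~f")]
    print(z_partition(rules, ["b", "f", "p"]))
    # layer 0: [(['b'], 'f')]; layer 1: the two penguin rules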

Professor Max Kistler investigated causality and causal reasoning within a ‘causation as transmission’ approach. Professor Kistler argued against the intervention account of causality and against modelling causal structures by Bayesian networks. In physics we find many examples of simultaneous lawful dependence. These laws involve quantities that are related to one another in equations; equations, however, may be solved for any of the variables they contain. Consider, for example, a magnetic stirrer rotating in a magnetic field with a certain angular momentum and a certain magnetic moment (keeping other variables fixed). The causal law relating the variables is expressed in an equation which may be solved for the angular momentum or for the magnetic moment with equal right.
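
A worked equation makes the point concrete. Taking the gyromagnetic relation as a stand-in for the example (our illustration, not necessarily the equation Professor Kistler used), the law links the magnetic moment and the angular momentum through the gyromagnetic ratio, and nothing in the equation itself singles out one side as the cause:

    \mu = \gamma L
    \quad\Longleftrightarrow\quad
    L = \frac{\mu}{\gamma}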

By adopting a coherence-based approach to probabilistic default reasoning, Professor Angelo Gilio studied the quasi conjunction (a basic notion for defining consistent conditional knowledge bases) and the Goodman and Nguyen inclusion relation for conditional events. Given any family of conditional events F, he proved that:
(i) F p-entails the quasi conjunction C(S) for every subset S of F;
(ii) p-entailment from F is equivalent to p-entailment from C(S), for some nonempty subset S of F.
Professor Gilio also characterised p-entailment by some alternative theorems. Finally, given any pair (F, E|H), he deepened the connections between p-entailment and the inclusion relation by introducing the class K of subsets S of F such that C(S) implies E|H, showing that K is additive and that its greatest element can be determined by a suitable algorithm.
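
For the reader's convenience, the notion of quasi conjunction can be stated explicitly (the standard definition going back to Adams, with \to the material conditional):

    C(A|H,\, B|K) = \big((H \to A) \wedge (K \to B)\big) \,\big|\, (H \vee K),
    \qquad
    C(S) = \Big(\bigwedge_{i}(H_i \to A_i)\Big) \,\Big|\, \Big(\bigvee_{i} H_i\Big)
    \quad\text{for } S = \{A_i|H_i\}.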

As an alternative to the well-known Bayesian networks, Professor Radim Jiroušek has shown that conditional independence models can also be represented with the help of a composition operator defined for marginal distributions. He generalised this composition operator to Shafer-Dempster belief functions and to possibility functions. A number of new properties of conditional independence within these approaches emerged during this work.

When dealing with causality, it is important to be able to study causal associations and causal chains, and the models have to cope with concepts like counterfactual reasoning. Until recently, mathematical models were usually founded on probabilistic graphical models, mainly Bayesian networks and influence diagrams. The new approach does not employ graphs to describe structural properties of the distributions. Instead, it describes the process by which a distribution can be assembled (composed) from its parts (marginals). While studying the algebraic properties of the operator of composition, it was found that this approach is also advantageous for describing causal relationships: such models can easily describe how causality is processed within the model.
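
The following Python sketch (a toy reconstruction under our own naming, not the project's software) shows a composition operator on discrete distributions: composing f, defined over variables A, with g, defined over B, yields f(x_A) * g(x_B) / g(x_{A intersect B}), which keeps f's marginal on A and borrows from g the conditional of the remaining variables given the shared ones.

    # Composition of two marginal distributions given as dicts point -> prob.
    from itertools import product

    def marginal(dist, vars_all, keep):
        idx = [vars_all.index(v) for v in keep]
        out = {}
        for point, p in dist.items():
            key = tuple(point[i] for i in idx)
            out[key] = out.get(key, 0.0) + p
        return out

    def compose(f, vars_f, g, vars_g, domain=(0, 1)):
        shared = [v for v in vars_f if v in vars_g]
        g_shared = marginal(g, vars_g, shared)
        vars_h = vars_f + [v for v in vars_g if v not in vars_f]
        h = {}
        for point in product(domain, repeat=len(vars_h)):
            val = dict(zip(vars_h, point))
            ps = g_shared[tuple(val[v] for v in shared)]
            if ps > 0:                      # f must be dominated on the overlap
                h[point] = (f[tuple(val[v] for v in vars_f)]
                            * g[tuple(val[v] for v in vars_g)] / ps)
        return vars_h, h

    f = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}   # over (X, Y)
    g = {(0, 0): 0.3, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.2}   # over (Y, Z)
    vars_h, h = compose(f, ["X", "Y"], g, ["Y", "Z"])
    print(vars_h, round(sum(h.values()), 10))  # ['X', 'Y', 'Z'] 1.0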

In the group of Professor Gernot Kleiter, principles of probability logic were used to model human inference. The human understanding of conditionals and human inference in a probabilistic setting were investigated experimentally. About two-thirds of the participants understood conditionals as conditional events and one-third as conjunctions. When a sequence of 50 or 70 conditionals was presented, many participants shifted from a conjunction to a conditional event interpretation. A test of working memory capacity was included in one experiment; only a low correlation (.30) between the interpretation (conjunction vs conditional event) and one of the memory test variables was found.
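
The two readings at issue can be stated probabilistically (a standard formulation, added here for clarity): for a conditional ‘if A then C’,

    P_{\text{conditional event}} = P(C \mid A) = \frac{P(A \wedge C)}{P(A)},
    \qquad
    P_{\text{conjunction}} = P(A \wedge C).

For instance, with P(A) = 0.5 and P(A ∧ C) = 0.3, the two readings yield 0.6 and 0.3 respectively.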

If one wants to understand the nature of a cognitive capability, it is a good strategy to study its origin and development. The groups of Professor Josef Perner and Dr Sarah Beck investigated the origins of counterfactual reasoning in children. Both groups developed prototypical scenarios in which children could first play out a ‘story’ and were afterwards asked ‘What if ... had not been the case?’. The distinction between a hypothetical version and a genuinely counterfactual version of the problem is critical.

In Professor Perner's project various age groups were investigated: children between three and fourteen years old, adolescents and adults. Only 42% of the 5.0 to 6.5 years group gave the correct answer to the genuinely counterfactual version, but 92% to the hypothetical version. The results support the conclusion that children are capable of hypothetical reasoning before the age of six, but of counterfactual reasoning only after the age of ten. There are remarkably parallel developmental lines between counterfactual reasoning and children's theory of mind.

One aim of Dr Beck and her group was to find out when children become capable of counterfactual reasoning. She explored how counterfactual reasoning can be distinguished from other types of reasoning. When a counterfactual scenario is elaborated cognitively, any open assumptions must be filled in with facts from the real-world scenario as far as logically possible: counterfactual reasoning has to follow the nearest-possible-world constraint (Lewis). Prior studies yielding individual-differences data suggested that counterfactual thinking may be related to overcoming prepotent responses. In two experiments she manipulated how three- to five-year-olds responded to counterfactual conditional and syllogism tasks. Children's performance improved on both conditional and syllogism tasks when they responded with an arrow rather than pointing with their fingers. Three- to four-year-olds benefited from both the arrow manipulation and, separately, the introduction of a delay before responding. The groups suggest that both manipulations help children to overcome an impulsive prepotent response to counterfactual questions, arising from a default assumption that information about the past is true.

Further information: LcpR website

Project Leader: Gernot Kleiter, University of Salzburg, Austria 

Principal Investigators:

  • Radim Jirousek, Institute of Information Theory and Automation, Prague, Czech Republic
  • Josef Perner, Universität Salzburg, Austria
  • Gerhard Schurz, University of Düsseldorf, Germany

Associated Partners:

  • Sarah Beck, University of Birmingham, United Kingdom
  • Angelo Gilio, University of Rome, La Sapienza, Italy
  • Max Kistler, University of Grenoble and Jean Nicod Institute, Paris, France

Logic for Interaction (LINT)

(AKA, DFG, NWO, VR)

The LINT CRP has achieved a qualitative improvement in the understanding of the logic of the concept of dependence, such as the dependence of a move of a player on previous moves, and of independence, such as the independence of a move of a player from a move he or she has not seen. At the beginning of the project these concepts were understood in terms of so-called dependence logic and so-called independence friendly logic. During the project a wealth of related concepts emerged, some of them interestingly probabilistic. Remarkably, it turned out that many of these concepts had previously arisen in database theory, with a slightly different emphasis, but the connection had gone unnoticed.

It now seems that there is a general theory of dependence and independence, based on ideas within logic, computer science and mathematics, that underlies many seemingly unrelated topics in a multitude of areas of science, including physical, behavioural, social and biological sciences. This development, based on analyses and results within the LINT CRP, seems very promising and will be further pursued in future research projects.

An important early observation of the LINT CRP was that there is a previously undiscovered connection between dependence logic and intuitionistic logic. It turned out that a proper use of intuitionistic logic leads to a deeper understanding of dependence logic and reveals an unexpected connection to second-order logic. Reverberations of this observation were felt throughout the project and its full impact has probably not yet been perceived.
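
For orientation, dependence logic evaluates formulas on teams, i.e., sets of assignments, and its dependence atom expresses functional dependence (the standard definition, quoted here for the reader's convenience):

    X \models \mathord{=}(\vec{x}, y)
    \iff
    \forall s, s' \in X \; \big( s(\vec{x}) = s'(\vec{x}) \Rightarrow s(y) = s'(y) \big).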

Game-theoretic semantics is one of the underlying themes in LINT. It is based on the concept of the existence of a winning strategy; therefore it was self-evident that there is a connection to existential second-order logic. During the LINT CRP this connection was put under a microscope and completely analysed. As a result a fairly complete picture emerged, and one can now pinpoint exactly the second-order equivalent of most logics that have arisen around the concept of dependence. This is important for conceptual reasons, but it has also led to hierarchy theorems that delineate the complexity of dependence and independence logics. In another direction, new techniques known as Nominal Game Semantics were developed, leading to concise mathematical models which capture realistic fragments of programming languages with dynamic features, such as ML.

In their full power the logical systems considered in the LINT CRP have high complexity. This led the project to consider more manageable fragments such as modal logics and 2-variable logics. In both cases a rather complete picture emerged. The results on decidability and complexity delineated nicely the difference between dependence and independence: the satisfiability problem for 2-variable dependence logic turned out to be NEXPTIME-complete, while the same problem for 2-variable independence friendly logic turned out to be undecidable.

Compositionality has been a recurring theme in the area of the LINT CRP. During the project major steps forward were taken by demonstrating that translations between certain of the relevant logics are non-compositional. On the other hand, it was established that even the probabilistic game-theoretic semantics permits a compositional rendering.

The LINT CRP also proposed a new approach to the problem of logical constants. According to this approach the logical constants are the output of a new operation of ‘constant extraction’, dual to logical consequence as a function from sets of symbols to consequence relations. By its generality, the method is relevant to a variety of logical frameworks studied within the LogICCC programme. In a nutshell, the idea is to combine Bolzano’s insight that any choice of constants determines a semantic consequence relation with a method for going in the converse direction: to extract, from any given consequence relation, its constants. The method might thus be called consequence mining.

Another breakthrough within the LINT CRP was the emergence of a framework for capturing the dynamics of information in games on graphs. It has been known since the 1980s that the elementary questions about non-terminating games with imperfect information are undecidable. A technique was developed for tracking the knowledge acquired by the members of a coalition during a play, leading to a comprehensive decidability criterion and a determinisation construction. A method was established for extracting the perfect-information structure of a game with imperfect information, based on a new concept of strategic independence. Applications to the imperfect-information games behind independence logic and independence friendly logic remain a topic for future research.

Further information: LINT website

Project Leader: Dag Westerståhl, Göteborg University, Sweden

Principal Investigators:

  • Jouko Väänänen, University of Amsterdam, the Netherlands
  • Erich Graedel, RWTH Aachen University, Germany
  • Lauri Hella, University of Tampere, Finland

Associated Partners:

  • Samson Abramsky, Oxford University, United Kingdom
  • Gabriel Sandu, Université Paris I, CNRS / ENS, Paris, France

Logical Models of Reasoning with Vague Information (LoMoReVi)

(FWF, GACR, CICYT)

The CRP has significantly contributed to the maturity of Mathematical Fuzzy Logic (MFL) as a research field combining traditional topics of mathematical logic, focused on a wide range of many-valued logics, with further perspectives on formal models of reasoning under uncertainty, as well as methods and themes from computer science, such as automated deduction and complexity-theoretic aspects. This progress is witnessed not just by a continuing stream of papers on algebraic, model- and proof-theoretic aspects of various logics, but - more importantly for the aims of LogICCC and LoMoReVI - by an intensified focus on the relation between MFL and theories of vagueness (e.g., supervaluation, contextualism, plurivaluationism, epistemicism). Moreover, a number of our results connect MFL with related research paradigms, such as probability and possibility theory, truthlikeness and similarity-based reasoning.

As a specific highlight that also reflects the development indicated above, we mention the Handbook of Mathematical Fuzzy Logic, edited by the LoMoReVI members Dr Petr Cintula, Professor Petr Hajek and Dr Carles Noguera: 6 of its 11 chapters are (co)authored by members of LoMoReVI.

We list a few further scientific achievements in somewhat telegraphic style:

  • Generalisation of Giles's dialogue game with betting scheme for Lukasiewicz logic to further t-norm based fuzzy logics. A related, more recent result demonstrates how to extract the truth functions of a many-valued logic from a generalised form of the game. Conversely, it is shown that a wide range of many-valued logics, including all finitely-valued logics, can be characterised by such a game (the Lukasiewicz truth functions at stake are sketched after this list).
  • Clarification of the relation between hypersequent-based proof systems and winning strategies in Giles's game. It has been shown that the logical rules of a particular analytic hypersequent system for Lukasiewicz logic can be interpreted as instructions for the systematic construction of strategies for one of the players in the game, where, owing to ignorance about the evaluation of final game states (unknown risk values), different choices have to be registered disjunctively in states.
  • Evaluation games for the characterisation of logics that arise from Shapiro's contextualist model of vagueness. This provides a basis for combining (at least) three aspects of models of reasoning with vague propositions: context dependence, alternative semantics of connectives and quantifiers, interaction of speakers.
  • Exhaustive study of minimal modal expansions over finitely-valued fuzzy logics and axiomatisations over several classes of modal frames.
  • Definition of different fuzzy modal logics to formalise reasoning about the uncertainty (probabilistic, possibilistic) of fuzzy events.
  • Formalisation of logical consequence operators in the frame of similarity-based reasoning.
  • Advances in the study of fuzzy description logics: decidability results and finite model property.
  • New results on arithmetical complexity and model theory for first order fuzzy logics.
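
As a concrete anchor for the first two items above, here is a minimal Python sketch of the Lukasiewicz truth functions over [0, 1] that the generalised Giles game recovers; under the betting reading, the value of a formula equals 1 minus the proponent's optimal expected loss (the numeric example is our own):

    # Lukasiewicz connectives on truth values in [0, 1].
    def neg(x):      return 1.0 - x
    def conj(x, y):  return max(0.0, x + y - 1.0)   # strong conjunction (t-norm)
    def disj(x, y):  return min(1.0, x + y)
    def impl(x, y):  return min(1.0, 1.0 - x + y)

    p, q = 0.8, 0.5
    print(impl(p, q))   # ~0.7 -> expected loss ~0.3 when asserting p -> q
    print(conj(p, p))   # ~0.6 -> strong conjunction is not idempotent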

Given the wide range of results it is hard to single out a particular result as most significant or valuable. However, the five joint publications listed at the end of this section represent a fair sample of what has been achieved in the project.

Further Information: LoMoReVi website

Project Leader: Christian Fermüller, Vienna University of Technology, Austria 

Principal Investigators:

  • Lluis Godo Lacasa, Institut d'Investigació en Intel·ligència Artificial (IIIA), Bellaterra, Spain
  • Petr Hajek, Institute of Computer Science, Prague, Czech Republic

SOCIAL SOFTWARE for elections, the allocation of tenders and coalition/alliance formation (SSEAC)

(AKA, DFG, CICYT) 

This collaborative research project contributed to the scientific goal of the LogICCC programme by improving existing social mechanisms and behaviour and by linking them with new computational advances based on logical formalisms, in particular algebraic logic.

This CRP was multidisciplinary (using logic, mathematics, economics, computer science and political science), contributed to broader insight and theory building in the field of elections, allocation of tenders and coalition/alliance formation, and made innovative use of logical formalisations in its application areas.

Apart from theory building and logical modelling, the research has important practical applications, which are essential for a sound future for Europe: a fair election system, a fair allocation of tenders, and a fair procedure for coalition and alliance formation. The European added value of the collaboration is evidenced by the fact that each group has specific competences, which are combined in the CRP, resulting in procedures and software products that could never have been developed by just one group alone. That the allocation of tenders gives rise to problems similar to those of preference aggregation is not generally known, and causes many court cases; hence a solution to this problem is of great practical value.

SSEAC organised four workshops: ‘Social Software and Computational Social Choice’, in Lyon (France), April 1-2, 2009; ‘New approaches to Voting and Social Choice’, in Tilburg (the Netherlands), May 25-26, 2009; ‘Voting and Allocation Systems’, in Turku (Finland), June 8-9, 2010; and a final workshop which will take place in Kiel (Germany), May 3-4, 2012.

IP1: The extension of Majority Judgment based on distances has been decomposed into two stages. The first stage assigns a collective assessment to each profile of individual assessments; this is done by means of appropriate aggregation functions, using suitable metrics that guarantee interesting properties. The second stage consists of a tie-breaking process, also based on distances. The complete procedure assigns a weak order on the set of alternatives to each profile of individual assessments and satisfies suitable properties within the social choice framework. The devised decision-making method has been applied to the allocation of tenders within a multi-criteria setting in such a way that some of the drawbacks pointed out by Chen have been eliminated.
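
A minimal sketch of the two-stage idea (the grades, metric and data below are our own illustration, not IP1's exact procedure): first aggregate each alternative's grades into a collective grade, here by the lower median as in Majority Judgment, then break ties by a distance, here the L1 distance of the profile from unanimity at the collective grade.

    # Two-stage grading: median grade first, distance-based tie-break second.
    import statistics

    def collective_grade(grades):
        return statistics.median_low(grades)

    def distance_to_unanimity(grades, g):
        return sum(abs(x - g) for x in grades)   # smaller = closer to unanimity

    profiles = {"A": [3, 2, 2, 1, 0],            # grades on a 0..3 scale
                "B": [2, 2, 2, 2, 2]}
    ranked = sorted(profiles, key=lambda a: (
        -collective_grade(profiles[a]),
        distance_to_unanimity(profiles[a], collective_grade(profiles[a]))))
    print(ranked)  # ['B', 'A']: both have median 2, but B is closer to unanimity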

IP2: Building on extensive investigations and experiments with the BDD-based tool RelView, a new method has been developed to represent simple games as BDDs and to solve many problems on simple games by working directly on the BDD representations, rather than indirectly via relational operations as RelView does. Compared with RelView, this led to much faster game-theoretic algorithms. Concerning efficiency, in most cases the runtimes of the algorithms depend linearly on the sizes of the BDDs. Due to this property, the new approach proved to be at least as valuable as known approaches, such as the use of generating functions, and in many practical experiments it even proved superior.
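
To convey why BDD representations of simple games can be so compact, the following toy Python sketch (our reconstruction of the general idea, not the new tool) builds a reduced ordered decision diagram for a weighted voting game [q; w1, ..., wn] by memoising subproblems on (next player, weight still needed); memoisation merges isomorphic subgraphs, which is exactly what keeps such diagrams small.

    # BDD-style diagram for the weighted voting game [5; 4, 3, 2, 1].
    from functools import lru_cache

    weights = [4, 3, 2, 1]
    quota = 5

    @lru_cache(maxsize=None)
    def node(i, need):
        if need <= 0:
            return True                        # 1-terminal: coalition wins
        if i == len(weights):
            return False                       # 0-terminal: coalition loses
        hi = node(i + 1, need - weights[i])    # player i joins
        lo = node(i + 1, need)                 # player i stays out
        return (i, need, hi, lo) if hi != lo else hi   # BDD reduction rule

    def wins(coalition):
        d = node(0, quota)
        while isinstance(d, tuple):
            i, _, hi, lo = d
            d = hi if i in coalition else lo
        return d

    print(wins({0, 1}), wins({2, 3}))          # True False
    print(node.cache_info().currsize, "memoised nodes")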

IP3: We studied fundamental problems of opinion modelling with implications for committee composition and social choice, connections and implications of the Ostrogorski paradox for spatial voting models, and continuity and measurability of Arrovian social welfare functions.
 
AP1: We have developed a procedure called the Borda majority count, partly based on Balinski and Laraki's ideas and partly based on the ideas of Range Voting and Borda, which - based on the voters' evaluations of the different parties - yields a seat distribution of the parties in parliament. We are studying the properties of this procedure.
We have applied several procedures - plurality vote, Borda count, approval voting and the Borda majority count - to a data set of about 7,000 responses obtained from the LISS (Longitudinal Internet Studies for the Social Sciences) panel, and compared the resulting compositions of the parliament in the Netherlands.
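
To convey the flavour of such a procedure, here is a hedged toy sketch (our own illustration; the actual Borda majority count under study may differ in detail): each voter grades every party, a party's count is the sum of its grades, and seats are then apportioned to counts, here by largest remainder as one simple choice.

    # Grade-sum tally plus a largest-remainder seat apportionment.

    def grade_counts(ballots):
        return {p: sum(b[p] for b in ballots) for p in ballots[0]}

    def largest_remainder(counts, seats):
        total = sum(counts.values())
        quotas = {p: seats * c / total for p, c in counts.items()}
        alloc = {p: int(q) for p, q in quotas.items()}
        left = seats - sum(alloc.values())
        for p in sorted(quotas, key=lambda p: quotas[p] - alloc[p],
                        reverse=True)[:left]:
            alloc[p] += 1
        return alloc

    ballots = [{"P1": 4, "P2": 2, "P3": 0},     # grades on a 0..4 scale
               {"P1": 3, "P2": 3, "P3": 1},
               {"P1": 0, "P2": 4, "P3": 2}]
    counts = grade_counts(ballots)
    print(counts)                               # {'P1': 7, 'P2': 9, 'P3': 3}
    print(largest_remainder(counts, 10))        # {'P1': 4, 'P2': 5, 'P3': 1}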

AP2: Social networks play a central role in our activities, in social phenomena, in economic and political life. They are particularly important in studying all kinds of influence phenomena, and are very useful for analysing the diffusion of information and the formation of opinions and beliefs. As we have provided an exhaustive analysis of social network structures and studied the impact they may have on human behaviour, we believe that our investigations are of significant importance.

Further information: SSEAC website  

Project Leader: José Luis Garcia Lapresta, Universidad de Valladolid, Spain 

Principal Investigators:

  • Rudolf Berghammer, Christian-Albrechts-Universität Kiel, Germany
  • Hannu Juhani Nurmi, University of Turku, Finland

Associated Partners:

  • Henricus Swart, University of Tilburg, the Netherlands
  • Agnieszka Rusinowska, École Normale Supérieure LSH, Université Lumière Lyon 2, CNRS, Écully, France

Vagueness, Approximation, and Granularity (VAAG)

(DFG, HR, NWO, VR)

Vagueness has often been seen as the antithesis of logic, and work in several fields that use logic has only recently begun to address vagueness with the tools of modern logic. The VAAG project has made tremendous progress in addressing vagueness and related phenomena from the perspectives of the five subfields involved - logic-based cognitive science, model-theoretic semantics of natural language, formal pragmatics, general psychology and computer science - and in integrating these perspectives. Three highlights of our results concern the questions of why natural language is vague, how vague concepts are represented, and how vagueness can be measured empirically.

Why is language vague? One of the most influential findings of recent research on vagueness is Professor Krifka's observation that language often prefers to be vague: for example, it is more natural to talk of a two-hour meeting than of a meeting of 2 hours, 13 minutes and 12.5 seconds, even if the latter duration is known to be accurate. Professor Gärdenfors has argued furthermore that language must be vague because of its finiteness. To explain the preference for vagueness, Dr van Rooij and colleagues have developed a model within the framework of signalling games. The model predicts that vagueness is a natural property of meaning that evolves when boundedly rational agents repeatedly engage in cooperative signalling.
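
A minimal simulation sketch of this kind of model (our own simplified quantal-response dynamic under assumed parameters, not the authors' exact model): boundedly rational agents play a two-message naming game over states in [0, 1]; the receiver interprets each message as the expected state given current usage, the sender soft-maximises, and the process settles on a smooth, i.e. vague, category boundary rather than a sharp threshold.

    # Quantal-response signalling over states 0.0 .. 1.0 with two messages.
    import math

    states = [i / 10 for i in range(11)]
    lam = 10.0                                   # bounded-rationality precision
    P = {t: {"low": 1 - t, "high": t} for t in states}   # initial convention

    for _ in range(50):
        # receiver: interpret each message as the expected state given usage
        r = {m: sum(t * P[t][m] for t in states) / sum(P[t][m] for t in states)
             for m in ("low", "high")}
        # sender: softmax response to the squared interpretation error
        for t in states:
            score = {m: math.exp(-lam * (t - r[m]) ** 2) for m in ("low", "high")}
            z = score["low"] + score["high"]
            P[t] = {m: score[m] / z for m in ("low", "high")}

    for t in states:
        print(f"{t:.1f}  P(high) = {P[t]['high']:.2f}")  # smooth S-shaped curve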

How are vague concepts represented? The vagueness of concepts can often be traced to effects of their core meaning. But there are exceptions: in a line of research pursued by Dr Solt, quantifiers like ‘more than half’ and ‘most’ seem to have the same core meaning, but differ for borderline cases like that of a 51% majority. In work with Dr Sauerland and others, Dr Solt further extended this type of research to the comparison of unmodified numerals like ‘120’ and comparative quantifiers such as ‘more than 120’. They argue that the latter trigger a granularity-related inference (‘less than 150’) that results in a weaker interpretation of the bare numerals.

How can vagueness be measured empirically? The classical conception of vagueness revolves around the notion of an unclear border: we find it hard to decide whether a man 180 cm tall is tall. But it is crucial to develop further measures relating to vagueness. Dr Palmovic and colleagues have approached vagueness with a number of neuro-cognitive methods and have found evidence for vagueness in their ERP data. Mr Bååth and colleagues have discussed a number of ways to measure vagueness in production data, specifically for the production of quantifiers. They show that a detection-theoretic measure correlates with intuitive judgments of the vagueness of quantifiers.

Further information: VAAG website 

Project Leaders:

  • Manfred Krifka, Zentrum für Allgemeine Sprachwissenschaft, Typologie und Universalienforschung, Berlin, Germany
  • Ulrich Sauerland, Geisteswissenschaftliche Zentren Berlin e.V., Germany

Principal Investigators:

  • Peter Gärdenfors, Lund University, Sweden
  • Velimir Isgum, University of Zagreb, Croatia
  • Robert Van Rooij, Universiteit van Amsterdam, the Netherlands
  • Frank Veltman, Universiteit van Amsterdam, the Netherlands

Associated Partners:

  • Michael Rovatsos, University of Edinburgh, United Kingdom
  • Ewan Klein, University of Edinburgh, United Kingdom