Quick Start#

Organisation of the package#

The samplers are organised in the package based on the format of the samples they produce. The module prefsampling.ordinal contains the ordinal samplers, which generate rankings of the candidates. The module prefsampling.approval contains the samplers for approval preferences, that is, the ones that generate sets of candidates.

Sample Types#

To make it easy to embed the package in all kinds of tools, we use basic Python types:

  • Ordinal samplers return collections of np.ndarray, that is, NumPy arrays in which the most preferred candidate is at position 0, the next one at position 1, and so forth;

  • Approval samplers return collections of set, where each set contains the approved candidates.

In all cases, the candidates are named 0, 1, 2, ….
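
As a quick illustration, the snippet below draws a few samples of each type using the impartial samplers. This is a minimal sketch; we assume the approval sampler's probability parameter can be passed by its name p.

from prefsampling.ordinal import impartial as ordinal_impartial
from prefsampling.approval import impartial as approval_impartial

# Ordinal samples: one NumPy array per voter, most preferred candidate at position 0
ordinal_votes = ordinal_impartial(3, 5, seed=23)

# Approval samples: one set of approved candidates per voter
approval_votes = approval_impartial(3, 5, p=0.5, seed=23)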

General Syntax#

All the samplers we provide have the same signature:

sampler(num_voters, num_candidates, **args, seed=None, **kwargs)

The parameter num_voters represents the number of samples that will be generated, and the parameter num_candidates the number of candidates to consider. The seed parameter can be used to pass a seed for the numpy random number generator, giving you more control when needed (for replication, for instance). The other parameters are specific to each sampler.
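
For instance, the call below (a sketch using the ordinal urn() sampler, whose sampler-specific parameter is alpha) generates 100 rankings over 10 candidates with a fixed seed for replicability.

from prefsampling.ordinal import urn

# 100 voters, 10 candidates, the urn-specific parameter alpha, and a seed for replication
urn(100, 10, alpha=0.1, seed=23)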

Ordinal Samplers#

Reference: prefsampling.ordinal

Each entry below lists the sampler-specific **args and **kwargs.

  • identity(): no additional parameters
  • impartial(): no additional parameters
  • impartial_anonymous(): no additional parameters
  • mallows(): **args: phi; **kwargs: central_vote (defaults to 0, 1, 2, …), normalise_phi (defaults to False), impartial_central_vote (defaults to False)
  • norm_mallows(): **args: norm_phi; **kwargs: central_vote (defaults to 0, 1, 2, …)
  • euclidean(): **args: num_dimensions, voters_positions, candidates_positions; **kwargs: voters_positions_args (defaults to dict()), candidates_positions_args (defaults to dict())
  • plackett_luce(): **args: alphas
  • didi(): **args: alphas
  • urn(): **args: alpha
  • stratification(): **args: weight
  • single_peaked_conitzer(): no additional parameters
  • single_peaked_walsh(): no additional parameters
  • single_peaked_circle(): no additional parameters
  • single_crossing(): no additional parameters
  • group_separable(): **kwargs: tree_sampler (defaults to SCHROEDER)
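
To illustrate, here is how two of these samplers could be called; the values of the sampler-specific parameters are arbitrary and passed by name (a sketch, not the only way to call them).

from prefsampling.ordinal import mallows, single_peaked_walsh

# Mallows' model: 100 rankings over 10 candidates with dispersion parameter phi
mallows(100, 10, phi=0.3)

# Single-peaked rankings (Walsh distribution): no sampler-specific parameter
single_peaked_walsh(100, 10)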

Approval Samplers#

Reference: prefsampling.approval

As above, each entry lists the sampler-specific **args and **kwargs.

  • identity(): **args: rel_num_approvals
  • empty(): no additional parameters
  • full(): no additional parameters
  • impartial(): **args: p
  • impartial_constant_size(): **args: rel_num_approvals
  • urn(): **args: p, alpha
  • urn_constant_size(): **args: rel_num_approvals, alpha
  • urn_partylist(): **args: alpha; **kwargs: parties (required if party_votes is None), party_votes (required if parties is None)
  • resampling(): **args: phi, rel_size_central_vote; **kwargs: central_vote (defaults to {0, 1, 2, …}), impartial_central_vote (defaults to False)
  • disjoint_resampling(): **args: phi, rel_size_central_vote; **kwargs: num_central_votes (defaults to None), central_votes (see docs for the defaults), impartial_central_votes (defaults to False)
  • moving_resampling(): **args: phi, rel_size_central_vote, num_legs; **kwargs: central_votes (see docs for the defaults), impartial_central_votes (defaults to False)
  • euclidean_threshold(): **args: threshold, num_dimensions, voters_positions, candidates_positions; **kwargs: voters_positions_args (defaults to dict()), candidates_positions_args (defaults to dict())
  • euclidean_vcr(): **args: voters_radius, candidates_radius, num_dimensions, voters_positions, candidates_positions; **kwargs: voters_positions_args (defaults to dict()), candidates_positions_args (defaults to dict())
  • euclidean_constant_size(): **args: rel_num_approvals, num_dimensions, voters_positions, candidates_positions; **kwargs: voters_positions_args (defaults to dict()), candidates_positions_args (defaults to dict())
  • noise(): **args: phi, rel_size_central_vote; **kwargs: distance (defaults to HAMMING), central_votes (see docs for the defaults), impartial_central_votes (defaults to False)
  • truncated_ordinal(): **args: rel_num_approvals, ordinal_sampler, ordinal_sampler_parameters
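
To illustrate, the sketch below calls two approval samplers with arbitrary parameter values; in particular, the ordinal sampler handed to truncated_ordinal() is just one possible choice.

from prefsampling.approval import resampling, truncated_ordinal
from prefsampling.ordinal import mallows

# Resampling model: 100 approval ballots over 10 candidates
resampling(100, 10, phi=0.25, rel_size_central_vote=0.5)

# Truncated ordinal: sample rankings from Mallows' model, each voter approves
# their top 30% of the candidates
truncated_ordinal(
    100,
    10,
    rel_num_approvals=0.3,
    ordinal_sampler=mallows,
    ordinal_sampler_parameters={"phi": 0.4},
)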

Composition of Samplers#

It is often useful to compose samplers, for instance to define mixtures. The functions mixture() and concatenation() can do that.

The mixture uses several samplers, each being used with a given probability.

from prefsampling.core import mixture
from prefsampling.ordinal import single_peaked_conitzer, single_peaked_walsh, norm_mallows

# We create a mixture for 100 voters and 10 candidates of the single-peaked samplers using the
# Conitzer one with probability 0.6 and the Walsh one with probability 0.4
mixture(
    100,  # num_voters
    10,  # num_candidates
    [single_peaked_conitzer, single_peaked_walsh],  # list of samplers
    [0.6, 0.4],  # weights of the samplers
    [{}, {}]  # parameters of the samplers
)

# We create a mixture for 100 voters and 10 candidates of different Mallows' models
mixture(
    100,  # num_voters
    10,  # num_candidates
    [norm_mallows, norm_mallows, norm_mallows],  # list of samplers
    [4, 10, 3],  # weights of the samplers
    [{"norm_phi": 0.4}, {"norm_phi": 0.9}, {"norm_phi": 0.23}]  # parameters of the samplers
)

The concatenation simply concatenates the votes returned by different samplers.

from prefsampling.core import concatenation
from prefsampling.ordinal import single_peaked_conitzer, single_peaked_walsh

# We create a concatenation for 100 voters and 10 candidates. 60 votes are sampled from the
# single_peaked_conitzer sampler and 40 votes from the single_peaked_walsh sampler.
concatenation(
    [60, 40],  # num_voters per sampler
    10,  # num_candidates
    [single_peaked_conitzer, single_peaked_walsh],  # list of samplers
    [{}, {}]  # parameters of the samplers
)

Filters#

Filters are functions that take a collection of votes and apply some random transformation to them. These are the filters we have implemented:

  • permute_voters(): Randomly permutes the voters
  • rename_candidates(): Randomly renames the candidates
  • resample_as_central_vote(): Resamples the votes, using each of them as the central vote of a sampler whose definition includes a central vote (e.g., mallows() or resampling())

Below is an example of how to use the resample_as_central_vote() filter.

from prefsampling.core import resample_as_central_vote
from prefsampling.ordinal import single_crossing, norm_mallows

num_candidates = 10
initial_votes = single_crossing(100, num_candidates)

resample_as_central_vote(
    initial_votes,  # The votes
    norm_mallows,  # The sampler
    {"norm_phi": 0.4, "seed": 855, "num_candidates": num_candidates},  # The arguments for the sampler
)
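
The other filters follow the same pattern and take the collection of votes as their first argument. The sketch below assumes that they also accept a seed keyword, as the rest of the package does; check the reference for the exact signatures.

from prefsampling.core import permute_voters, rename_candidates
from prefsampling.ordinal import single_peaked_conitzer

votes = single_peaked_conitzer(100, 10)

# Randomly shuffle the order of the voters
permuted_votes = permute_voters(votes, seed=855)

# Randomly relabel the candidates in every vote
renamed_votes = rename_candidates(votes, seed=855)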

Constants#

The constants used in the package are defined alongside their corresponding samplers; see for instance EuclideanSpace or SetDistance. They are also all gathered in the prefsampling.CONSTANTS enumeration.

from prefsampling import CONSTANTS

CONSTANTS.BALL
CONSTANTS.SCHROEDER
CONSTANTS.BUNKE_SHEARER

Note that EuclideanSpace and CONSTANTS are not the same enumeration, so direct comparison will fail: CONSTANTS.BALL == EuclideanSpace.BALL evaluates to False. However, the values are the same, so CONSTANTS.BALL.value == EuclideanSpace.BALL.value evaluates to True.
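
The snippet below spells this out; it assumes EuclideanSpace can be imported from prefsampling.core.euclidean (check the reference for the exact module).

from prefsampling import CONSTANTS
from prefsampling.core.euclidean import EuclideanSpace  # module path assumed, see the reference

CONSTANTS.BALL == EuclideanSpace.BALL              # False: two different enumerations
CONSTANTS.BALL.value == EuclideanSpace.BALL.value  # True: the underlying values match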