# The number of maximal left-compressed intersecting families

A family of sets $\mathcal{A} \subseteq [n]^{(r)}$ (subsets of $[n]$ of size $r$) is *intersecting* if every pair of sets in $\mathcal{A}$ has a common element.  If $n < 2r$ then every pair of $r$-sets intersects, so $\mathcal{A}$ can be as large as $\binom{n}{r}$.  If $n \geq 2r$ then the Erdős–Ko–Rado theorem states that $|\mathcal{A}| \leq \binom{n-1}{r-1}$, which (up to relabelling of the ground set) is attained only by the star $\mathcal{S}$ of all sets containing the element $1$.

A hands-on proof of the Erdős–Ko–Rado theorem uses a tool called compression.  A family $\mathcal{A}$ is *left-compressed* if for every $A \in \mathcal{A}$, any set obtained from $A$ by deleting an element and replacing it by a smaller one is also in $\mathcal{A}$.  You can show by repeatedly applying a certain compression operator that for every intersecting family there is a left-compressed intersecting family of the same size.  Thus it suffices to prove the Erdős–Ko–Rado theorem for left-compressed families, which is easy to do by induction.
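Here is one way the compression operator might look in Haskell (a sketch; the names `compressSet` and `compressFamily` are mine, not from the post).  Replacing $j$ by $i < j$ in each set where possible, without creating duplicates, preserves the size of the family:

```haskell
import Data.List (delete, insert)

-- replace j by i in the set a (sets are strictly increasing lists),
-- provided j is present and i is not; otherwise leave a alone
compressSet :: Int -> Int -> [Int] -> [Int]
compressSet i j a
  | j `elem` a && i `notElem` a = insert i (delete j a)
  | otherwise                   = a

-- the (i,j)-compression of a family: move each set to its compression
-- unless the compressed set is already present, so the size is unchanged
compressFamily :: Int -> Int -> [[Int]] -> [[Int]]
compressFamily i j as =
  [ if c `elem` as then a else c | a <- as, let c = compressSet i j a ]
```

One checks that compression preserves the intersecting property, so repeatedly compressing with all pairs $i < j$ eventually produces a left-compressed intersecting family of the same size.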

There is a strong stability result for large intersecting families.  The Hilton–Milner family consists of all sets that contain $1$ and at least one element of $B = \{2, \ldots, r+1\}$, together with $B$ itself.  This is an intersecting family, and in fact is the largest intersecting family not contained in a star.  The Hilton–Milner family has size $\binom{n-1}{r-1} - \binom{n-r-1}{r-1} + 1$, so any intersecting family that gets anything like close to the Erdős–Ko–Rado bound must be a subset of a star.

As part of an alternative proof of the Hilton–Milner theorem, Peter Borg partially answered the following question.

Let $\mathcal{A} \subseteq [n]^{(r)}$ be an intersecting family and let $X \subseteq [n]$.  Let $\mathcal{A}(X) = \{A \in \mathcal{A} : A \cap X \neq \emptyset\}$.  For which $X$ is $|\mathcal{A}(X)|$ maximised by taking $\mathcal{A}$ to be a star?

Borg used the fact that this is true for $|X| \geq r$ to reprove the Hilton–Milner theorem.  In Maximum hitting for $n$ sufficiently large I completed the classification of the $X$ for which this is true for $n$ sufficiently large.  The proof used the apparently new observation that, for $n \geq 2r$, every maximal left-compressed intersecting family in $[n]^{(r)}$ corresponds to a unique maximal left-compressed intersecting family of $[2r]^{(r)}$.  In particular, the number of maximal left-compressed intersecting families in $[n]^{(r)}$ is independent of $n$ for $n \geq 2r$.  In the rest of this post I'll explain how I computed the number of these families for each small $r$.

We want to count maximal left-compressed intersecting families of $[2r]^{(r)}$.  The maximal part is easy: the only way to get two disjoint sets of size $r$ from $[2r]$ is to take a set and its complement, so we must simply choose one set from each complementary pair.  To make sure the family we generate in this way is left-compressed we must also ensure that whenever we choose a set $A$ we also choose every set $B$ with $B \leq A$, where $B \leq A$ means “$B$ can be obtained from $A$ by a sequence of compressions”.  The compression order has the following properties.

• If $A = \{a_1 < a_2 < \cdots < a_r\}$ and $B = \{b_1 < b_2 < \cdots < b_r\}$ then $B \leq A$ if and only if $b_i \leq a_i$ for each $i$.
• $B \leq A$ if and only if $A^c \leq B^c$.
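Both properties are easy to check directly.  The following sketch (with my own hypothetical names, not those of the post) encodes the pointwise description of $\leq$ and tests the complement-reversal property:

```haskell
import Data.List ((\\))

-- pointwise description of the compression order on r-sets,
-- written as strictly increasing lists
leftOf :: [Int] -> [Int] -> Bool
leftOf bs as = and (zipWith (<=) bs as)

-- complement inside the ground set [1..n]
complementIn :: Int -> [Int] -> [Int]
complementIn n a = [1..n] \\ a

-- property 2: B <= A if and only if A^c <= B^c (complements in [2r])
reversal :: Int -> [Int] -> [Int] -> Bool
reversal n a b = leftOf b a == leftOf (complementIn n a) (complementIn n b)
```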

Here’s one concrete algorithm.

1. Generate a list of all sets from $[2r-1]^{(r)}$.  This list has one set from each complementary pair, namely the one not containing $2r$.
2. Put all $A$ from the list with $A \leq A^c$ into the family.  (These sets must be in every maximal left-compressed intersecting family: choosing $A^c$ instead would force $A$ to be present as well, and $A$ and $A^c$ are disjoint.)  Note that we never have $A^c \leq A$, as $A^c$ contains $2r$ but $A$ doesn't.
3. Let $A$ be the first element of the list and branch into two cases depending on whether we take $A$ or $A^c$.
• If we take $A$, also take all $B$ from the list with $B \leq A$, and $B^c$ for all $B$ from the list with $B^c \leq A$.  (Since $B^c$ contains $2r$ and $A$ doesn't, the second condition will never actually trigger.)
• If we take $A^c$, also take all $B$ from the list with $B \leq A^c$, and $B^c$ for all $B$ from the list with $B^c \leq A^c$.  By the second property of the compression order, $B^c \leq A^c$ is equivalent to $A \leq B$, which is cheaper to test because it avoids taking complements.
4. Repeat recursively on each of the two lists generated in the previous step.  Stop on each branch whenever the list of remaining options is empty.

The following is a fairly direct translation of this algorithm into Haskell that makes no attempt to store the families generated and just counts the number of possibilities.  A complete source file is attached to the end of this post.

```haskell
import Data.List ((\\))

-- all ways to choose k elements of an increasing list, in increasing order
choose :: Int -> [Int] -> [[Int]]
choose 0 _      = [[]]
choose _ []     = []
choose k (x:xs) = map (x:) (choose (k - 1) xs) ++ choose k xs

r :: Int
r = 5

-- one set from each complementary pair, with the forced sets
-- (those below their own complement) removed
simpleOptions :: [[Int]]
simpleOptions = [a | a <- choose r [1..(2*r-1)], not $ a `simpleLeftOf` simpleComplement a]

simpleLeftOf :: [Int] -> [Int] -> Bool
simpleLeftOf xs ys = all id $ zipWith (<=) xs ys

simpleComplement :: [Int] -> [Int]
simpleComplement a = [1..(2*r)] \\ a

simpleCount :: [[Int]] -> Integer
simpleCount [] = 1
simpleCount (a:as) = simpleCount take + simpleCount leave
  where
    -- take a:
    -- all b with b <= a or b^c <= a are forced;
    -- the second case never happens as b^c contains 2r but a doesn't
    take = [b | b <- as, not $ b `simpleLeftOf` a]
    -- leave a, and so take a^c:
    -- all b with b <= a^c or b^c <= a^c (equivalently, a <= b) are forced
    c = simpleComplement a
    leave = [b | b <- as, not (b `simpleLeftOf` c || a `simpleLeftOf` b)]
```


This will compute the number of maximal left-compressed intersecting families for $r \leq 5$ in a fraction of a second.  For $r = 6$ it would probably find the answer in less than a month.  I obtained the value for $r = 6$ in a couple of days on a single core by using a better representation of the sets in our family.

The dream is to pack each element of our list into a single machine word and perform each comparison in a small number of instructions.  For example, we could encode an element of $[12]^{(6)}$ by writing each of its six elements as four binary digits and then concatenating them in increasing order to obtain a 24-bit word.  But comparing two such words as integers compares the corresponding sets lexicographically rather than pointwise.  Edward Crane suggested that as the lists are so short and the elements are so small we can afford to be quite a lot more wasteful in our representation: we can write each element of our set in unary!  The rest of this section should be considered joint work with him.

The first iteration of the idea is to write each element $i$ of our set as a string of $i$ 1's followed by $12 - i$ 0's, then concatenate these strings to obtain a representation of our set.  This representation has the great advantage that we can compare sets pointwise by comparing strings bitwise, and we can do this using very few binary operations: $b$ is contained in $a$ if and only if $b \mathbin{\mathrm{AND}} a = b$.
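As a sketch (assuming the conventions above; the names are mine, not those of the attached source), the unary comparison can be expressed with arbitrary-precision `Integer`s standing in for the too-long bit strings:

```haskell
import Data.Bits ((.&.), (.|.), shiftL)

-- one element i of [12] in unary: i ones followed by (12 - i) zeros
unary :: Int -> Integer
unary i = ((1 `shiftL` i) - 1) `shiftL` (12 - i)

-- concatenate the six unary strings into one 72-bit value
encodeUnary :: [Int] -> Integer
encodeUnary = foldl (\acc i -> (acc `shiftL` 12) .|. unary i) 0

-- pointwise comparison: B <= A iff every bit of b is also set in a
unaryLeftOf :: Integer -> Integer -> Bool
unaryLeftOf b a = b .&. a == b
```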

Unfortunately this representation uses 72 bits in total, so won't fit into a 64-bit machine word.  Observing that we never use the element $12$ and encoding $i$ by $i$ 1's followed by $11 - i$ 0's saves only 6 bits.  But we can do even better by encoding each element of the set differently.  The first element is always at least 1, the second is always at least 2 and so on.  Similarly, the first element is at most 7, the second at most 8 and so on.  Working through the details we arrive at the following representation.

Identify each element of $[12]^{(6)}$ with an “up-and-right” path from the bottom left to the top right corner of a $6 \times 6$ grid: at the $i$th step move right if $i$ is in your set and up if it isn't.  Then $B \leq A$ if and only if the path corresponding to $A$ never goes below the path corresponding to $B$.  So we can compare sets by comparing the regions below the corresponding paths.  Recording these regions can be done using 36 bits, which happily sits inside a machine word.  This representation also has the helpful property that taking the complement of a set corresponds to reflecting the path about the up-and-right diagonal, so the representation of the complement of a set can be obtained by swapping certain pairs of bits followed by a bitwise NOT.
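Here is a sketch of this final representation (my names and bit layout, which may differ from the attached source).  The $i$th smallest element $a_i$ gives a right step at height $a_i - i$, so column $i$ of the grid gets $a_i - i$ set bits, 36 bits in all:

```haskell
import Data.Bits ((.&.), setBit)
import Data.Word (Word64)

-- the i-th right step of the path for a (1-based) happens at height a_i - i,
-- since exactly a_i - i of the elements below a_i are missing from the set
heights :: [Int] -> [Int]
heights a = zipWith (-) a [1..]

-- pack the region below the path into a Word64: 6 bits per column,
-- with the bottom (a_i - i) bits of column i set
encodePath :: [Int] -> Word64
encodePath a =
  foldl setBit 0 [6 * i + j | (i, h) <- zip [0..] (heights a), j <- [0 .. h - 1]]

-- B <= A iff the region for B is contained in the region for A
pathLeftOf :: Word64 -> Word64 -> Bool
pathLeftOf b a = b .&. a == b
```

For example the minimum set $\{1, \ldots, 6\}$ encodes to 0 (its path hugs the bottom and right edges) and its complement $\{7, \ldots, 12\}$ encodes to all 36 bits set.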

The value for $r = 6$ was obtained using this new representation and the old algorithm, with one minor tweak.  It's a bad idea to start with a lexicographically ordered list of sets, as the early decisions will not be meaningful and will not lead to much of a reduction in the length of the lists.  Optimal selection of which pair to decide at each stage is probably a complicated question.  As a compromise I randomised the order of the list at the start of the process, then took the first remaining pair at each stage.

The Haskell source is here.  There are a few more performance tricks to do with the exact bit representation of the sets, which I’m happy to discuss if anything is unclear.

# Random Structures and Algorithms 2017

A partial, chronologically ordered, list of talks I attended at RSA in Gniezno, Poland. Under construction until the set of things I can remember equals the set of things I’ve written about.

### Shagnik Das

A family of subsets of $[n]$ that shatters a $k$-set has at least $2^k$ elements.  How many $k$-sets can we shatter with a family of size $2^k$?  A block construction shatters a reasonable number of $k$-sets, and random families do much worse.  In fact we can shatter a constant fraction of all $k$-sets.  When $n = 2^k - 1$, identify the ground set with $\mathbb{F}_2^k \setminus \{0\}$, and for each $w \in \mathbb{F}_2^k$ colour each point $v$ by $\langle v, w \rangle$; the family consists of the $2^k$ colour classes $\{v : \langle v, w \rangle = 1\}$.

Claim.  A $k$-set $\{v_1, \ldots, v_k\}$ is shattered if and only if it is a basis for $\mathbb{F}_2^k$.

Proof.  First suppose that $\{v_1, \ldots, v_k\}$ is a basis.  Then for any sequence $\epsilon_1, \ldots, \epsilon_k \in \{0, 1\}$, there is a unique vector $w$ such that $\langle v_i, w \rangle = \epsilon_i$ for each $i$.  (We are just solving a system of full rank linear equations mod 2.)

Next suppose that $v_1, \ldots, v_k$ are linearly dependent; that is, that they are contained in a proper subspace $V$ of $\mathbb{F}_2^k$.  Choose $u \neq 0$ orthogonal to $V$.  Then for any $w$ and any $v \in V$ we have $\langle v, w + u \rangle = \langle v, w \rangle$, so the colourings given by $w$ and $w + u$ agree on $\{v_1, \ldots, v_k\}$.  In particular fewer than $2^k$ distinct colourings appear on this set, so it is not shattered.

We finish with the observation that random sets of $k$ vectors are fairly likely to span $\mathbb{F}_2^k$: the probability is $\prod_{i=1}^{k} \left(1 - 2^{-i}\right) > 0.288$.
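This product is the standard one: the $i$th vector must avoid the span of the previous ones.  A quick sketch to evaluate it (`basisProb` is my name for it):

```haskell
-- probability that k independent uniform vectors in F_2^k form a basis:
-- the i-th vector must avoid the span of the first i - 1 vectors,
-- which after reindexing gives the product over i of (1 - 2^(-i))
basisProb :: Int -> Double
basisProb k = product [1 - 2 ^^ negate i | i <- [1 .. k]]
```

The partial products decrease towards the limit $\prod_{i \geq 1} (1 - 2^{-i}) \approx 0.2888$, so a constant fraction of random $k$-sets of vectors are bases.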

Blowing up this colouring gives a construction that works for larger $n$.

At the other end of the scale, we can ask how large a family is required to shatter every $k$-set from $[n]$.  The best known lower bound falls well short of the best known upper bound, which comes from a random construction.  Closing the gap between these bounds, or derandomising the upper bound, would both be of significant interest.

### Andrew McDowell

At the start of his talk in Birmingham earlier this summer, Peter Hegarty played two clips from Terminator in which a creature first dissolved into liquid and dispersed, then later reassembled, stating that it had prompted him to wonder how independent agents can meet up without any communication. Andrew tackled the other half of this question: how can non-communicating agents spread out to occupy distinct vertices of a graph? He was able to analyse some strategies using Markov chains in a clever way.

### Tássio Naia

A sufficient condition for embedding an oriented tree on $n$ vertices into every tournament on $n$ vertices, which implies that almost all oriented trees are unavoidable in this sense.

# Partition regularity and other combinatorial problems

This is the imaginative title of my PhD thesis.  It contains four unrelated pieces of work.  (I was warned off using this phrasing in the thesis itself, where the chapters are instead described as “self-contained”.)

The first and most substantial concerns partition regularity.  It is a coherent presentation of all of the material from Partition regularity in the rationals, Partition regularity with congruence conditions and Partition regularity of a system of De and Hindman.

The remaining three chapters are expanded versions of Maximum hitting for $n$ sufficiently large, Random walks on quasirandom graphs and A note on balanced independent sets in the cube.1

1 “A note … ” was fairly described by one of my examiners as a “potboiler”.  It was also my first submitted paper, and completed in my second of three years.  Perhaps this will reassure anybody in the first year of a PhD who is worrying that they have yet to publish.

# Maximum hitting for n sufficiently large

Borg asked what happens to the Erdős–Ko–Rado theorem if we only count sets meeting some fixed set $X$, and answered the question for $|X| \geq r$, where $r$ is the size of the sets in the set family.  This paper answers the question for $|X| < r$, provided $n$, the size of the ground set, is sufficiently large.

There is a typo in the proof of Theorem 4 in the journal version. The line beginning “By Lemma 9, has size polynomial in …” should read “By Lemma 9, …”. Thanks to Candida Bowtell for spotting this.