A family $\mathcal{A}$ of sets (subsets of $\{1, \ldots, n\}$ of size $r$) is *intersecting* if every pair of sets in $\mathcal{A}$ has a common element. If $n < 2r$ then every pair of $r$-sets intersects, so $\mathcal{A}$ can be as large as $\binom{n}{r}$. If $n \geq 2r$ then the Erdős–Ko–Rado theorem states that $|\mathcal{A}| \leq \binom{n-1}{r-1}$, which (up to relabelling of the ground set) is attained only by the *star* of all sets containing the element $1$.

A hands-on proof of the Erdős–Ko–Rado theorem uses a tool called *compression*. A family $\mathcal{A}$ is *left-compressed* if for every $A \in \mathcal{A}$, any set obtained from $A$ by deleting an element and replacing it by a smaller one is also in $\mathcal{A}$. You can show by repeatedly applying a certain compression operator that for every intersecting family there is a left-compressed intersecting family of the same size. Thus it suffices to prove the Erdős–Ko–Rado theorem for left-compressed families, which is easy to do by induction.

There is a strong stability result for large intersecting families. The Hilton–Milner family consists of all sets that contain $1$ and at least one element of $\{2, \ldots, r+1\}$, together with $\{2, \ldots, r+1\}$ itself. This is an intersecting family, and in fact is the largest intersecting family not contained in a star. The Hilton–Milner family has size $\binom{n-1}{r-1} - \binom{n-r-1}{r-1} + 1$, which is far smaller than the Erdős–Ko–Rado bound for large $n$, so any family that gets anything like close to the Erdős–Ko–Rado bound must be a subset of a star.

As part of an alternative proof of the Hilton–Milner theorem, Peter Borg partially answered the following question.

*Let $\mathcal{A}$ be an intersecting family of $r$-sets and let $X$ be a set. Let $\mathcal{A}(X) = \{A \in \mathcal{A} : A \cap X \neq \emptyset\}$. For which $X$ is $|\mathcal{A}(X)| \leq |\mathcal{S}(X)|$, where $\mathcal{S}$ is a star?*

Borg used the fact that this is true for a large class of $X$ to reprove the Hilton–Milner theorem. In *Maximum hitting for $n$ sufficiently large* I completed the classification of the $X$ for which this is true for large $n$. The proof used the apparently new observation that, for $n \geq 2r$, every maximal left-compressed intersecting family of $r$-sets from $\{1, \ldots, n\}$ corresponds to a unique
maximal left-compressed intersecting family of $r$-sets from $\{1, \ldots, 2r\}$. In particular, the number of maximal left-compressed intersecting families of $r$-sets is independent of $n$. For $r \leq 5$ the numbers of such families are recorded in the OEIS. In the rest of this post I'll explain how I obtained these numbers.

We want to count maximal left-compressed intersecting families of $r$-sets from $\{1, \ldots, 2r\}$. The maximal part is easy: the only way to get two disjoint sets of size $r$ from a ground set of size $2r$ is to take a set and its complement, so we must simply choose one set from each complementary pair. To make sure the family we generate in this way is left-compressed we must also ensure that whenever we choose a set $A$ we also choose every set $B$ with $B \leq A$, where $B \leq A$ means "$B$ can be obtained from $A$ by a sequence of compressions". The compression order has the following properties.

- Writing $a_1 < \cdots < a_r$ and $b_1 < \cdots < b_r$ for the elements of $A$ and $B$, we have $B \leq A$ if and only if $b_i \leq a_i$ for each $i$.
- Taking complements reverses the order: $B \leq A$ if and only if $A^c \leq B^c$.

Here's one concrete algorithm.

1. List one representative of each complementary pair: the member not containing $2r$. If $A \leq A^c$ then the pair is decided in advance (choosing $A^c$ would force $A$ as well, and a family containing both is not intersecting), so keep only the genuinely undecided pairs, those with $A \not\leq A^c$.
2. Take the first undecided pair $\{A, A^c\}$ and branch on the two possible choices. Choosing $A$ decides every later pair with a member below $A$; choosing $A^c$ decides every later pair with a member below $A^c$.
3. Recurse on the two shortened lists of undecided pairs and add the resulting counts.

The following is a fairly direct translation of this algorithm into Haskell that makes no attempt to store the families generated and just counts the number of possibilities. A source file with the necessary imports and the `choose` function is attached to the end of this post.

This will compute the number of maximal left-compressed intersecting families for $r \leq 5$ in a fraction of a second. For $r = 6$ it would probably find the answer in less than a month. I obtained the value for $r = 6$ in a couple of days on a single core by using a better representation of the sets in our family.

The dream is to pack all of the elements of our list into a single machine word and perform each comparison in a small number of instructions. For example, we could encode a $6$-element subset of $\{1, \ldots, 12\}$ by writing each element as four binary digits then concatenating them in increasing order to obtain a 24-bit word. But comparing two such words as integers compares the corresponding sets lexicographically rather than pointwise. Edward Crane suggested that as the lists are so short and the elements are
so small we can afford to be quite a lot more wasteful in our representation: we can write each element of our set in unary! The rest of this section should be considered joint work with him.

The first iteration of the idea is to write each element $x$ of our set as a string of $x$ 1s followed by $12 - x$ 0s, then concatenate these strings to obtain a representation of our set. This representation has the great advantage that we can compare sets pointwise by comparing strings bitwise, and we can do this using very few binary operations: writing $a$ and $b$ for the words representing $A$ and $B$, we have $B \leq A$ if and only if $b \mathbin{\mathrm{AND}} a = b$.

Unfortunately this representation uses 72 bits in total, so won't fit into a 64-bit machine word. Observing that we never use $0$, and encoding $x$ by $x - 1$ 1s followed by $12 - x$ 0s, saves only 6 bits. But we can do even better by encoding each element of the set differently. The first element is always at least 1, the second is always at least 2 and so on. Similarly, the first element is at most 7, the second at most 8 and so on. Working through the details we arrive at the following representation.

Identify each $6$-element subset of $\{1, \ldots, 12\}$ with an "up-and-right" path from the bottom left to the top right corner of a $6 \times 6$ grid: at the $i$th step move right if $i$ is in your set and up if it isn't. Then $B \leq A$ if and only if the path corresponding to $A$ never goes below the path corresponding to $B$. So we can compare sets by comparing the regions below the corresponding paths. Recording these regions can be done using 36 bits, which happily sits inside a machine word.
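As a concrete sketch (my code, not the post's attached source), one way to pack the region below a path into a 36-bit word is to store each column's height in unary. If the elements of $A$ in increasing order are $a_1 < \cdots < a_6$, then the path makes its $i$th right-step after $a_i - i$ up-steps, so the region has $a_i - i$ cells in column $i$:

```haskell
import Data.Bits (shiftL, (.&.), (.|.))
import Data.List (foldl')
import Data.Word (Word64)

-- Pack a sorted 6-element subset of [1..12] into a 36-bit word: 6 bits
-- per column of the grid, with column i filled in unary to height
-- a_i - i (the number of cells below the path in that column).
encode :: [Int] -> Word64
encode as = foldl' addColumn 0 (zip [1 ..] as)
  where
    addColumn w (i, a) = (w `shiftL` 6) .|. ((1 `shiftL` (a - i)) - 1)

-- b is below a in the compression order iff b's region is contained in
-- a's region: a single AND and comparison on the packed words.
leftOf :: Word64 -> Word64 -> Bool
leftOf b a = b .&. a == b
```

For instance `encode [1,2,3,4,5,6]` is the all-zeros word (the lowest path, with nothing below it) and `encode [7,8,9,10,11,12]` sets all 36 bits (the highest path, with the whole grid below it).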
This representation also has the helpful property that taking the complement of a set corresponds to reflecting its path about the up-and-right diagonal, so the representation of the complement of a set can be obtained by swapping certain pairs of bits followed by a bitwise NOT.

The value for $r = 6$ was obtained using this new representation and the old algorithm, with one minor tweak. It's a bad idea to start with a lexicographically ordered list of sets, as the early decisions will not be meaningful and will not lead to much of a reduction in the length of the lists. Optimal selection of which pair to decide at each stage is probably a complicated question. As a compromise I randomised the order of the list at the start of the process, then took the first remaining pair at each stage.
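The bit-swap-and-NOT can be rendered directly, using the column-in-unary packing sketched above (again my code, not the post's; a tuned implementation would use masked shifts rather than a loop over cells). Reflecting about the diagonal swaps the bits for cells $(i, j)$ and $(j, i)$, and the region below the complement's path is exactly the reflection of the cells *above* the original path, hence the bitwise NOT:

```haskell
import Data.Bits (complement, setBit, shiftL, testBit, (.&.), (.|.))
import Data.List (foldl')
import Data.Word (Word64)

-- Pack a sorted 6-element subset of [1..12] into a 36-bit word,
-- column i holding the height a_i - i of the region in unary.
encode :: [Int] -> Word64
encode as = foldl' addColumn 0 (zip [1 ..] as)
  where
    addColumn w (i, a) = (w `shiftL` 6) .|. ((1 `shiftL` (a - i)) - 1)

-- Bit position of the cell in column i, row j (both 1-based); encode
-- shifts column 1 into the high bits.
pos :: Int -> Int -> Int
pos i j = (6 - i) * 6 + (j - 1)

-- Reflect the region about the up-and-right diagonal by swapping the
-- bits for cells (i, j) and (j, i).
reflect :: Word64 -> Word64
reflect w = foldl' move 0 [(i, j) | i <- [1 .. 6], j <- [1 .. 6]]
  where
    move v (i, j) = if testBit w (pos i j) then setBit v (pos j i) else v

-- Complement of a set: reflect, then bitwise NOT, masked to 36 bits.
complementWord :: Word64 -> Word64
complementWord w = complement (reflect w) .&. ((1 `shiftL` 36) - 1)
```

For example `complementWord (encode [1,2,3,4,5,6])` equals `encode [7,8,9,10,11,12]`, and applying `complementWord` twice returns the original word.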
```haskell
import Data.List ((\\))

-- All r-subsets of a list, elements in increasing order.
choose :: Int -> [a] -> [[a]]
choose 0 _ = [[]]
choose _ [] = []
choose k (x:xs) = map (x:) (choose (k-1) xs) ++ choose k xs

r = 5

-- Undecided complementary pairs, represented by the member avoiding 2r.
simpleOptions = [a | a <- choose r [1..(2*r-1)], not $ a `simpleLeftOf` simpleComplement a]

simpleLeftOf xs ys = all id $ zipWith (<=) xs ys

simpleComplement a = [1..(2*r)] \\ a

-- simpleCount simpleOptions gives the count for the r above.
simpleCount [] = 1
simpleCount (a:as) = simpleCount take + simpleCount leave
  where
    -- take a
    -- all pairs with b < a or b^c < a are forced
    -- second case never happens as b^c contains 2r but a doesn't
    take = [b | b <- as, not $ b `simpleLeftOf` a]
    -- leave a, and so take a^c
    -- all pairs with b < a^c or b^c < a^c (equivalently, a < b) are forced
    c = simpleComplement a
    leave = [b | b <- as, not (b `simpleLeftOf` c || a `simpleLeftOf` b)]
```
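As a sanity check, the same routine with $r$ passed as a parameter (a small restructuring of the listing above, not from the original post) can be run for the first few values of $r$:

```haskell
import Data.List ((\\))

-- All r-subsets of a list, elements in increasing order.
choose :: Int -> [a] -> [[a]]
choose 0 _ = [[]]
choose _ [] = []
choose k (x:xs) = map (x:) (choose (k-1) xs) ++ choose k xs

-- Number of maximal left-compressed intersecting families of r-sets
-- from [1..2r], computed by the branching algorithm above.
countFor :: Int -> Integer
countFor r = go opts
  where
    below xs ys = and (zipWith (<=) xs ys)
    complementOf a = [1 .. 2*r] \\ a
    opts = [a | a <- choose r [1 .. 2*r - 1], not (a `below` complementOf a)]
    go [] = 1
    go (a : as) = go taken + go left
      where
        taken = [b | b <- as, not (b `below` a)]
        c = complementOf a
        left = [b | b <- as, not (b `below` c || a `below` b)]
```

Here `map countFor [1, 2, 3]` evaluates to `[1, 2, 6]`, which matches a pencil-and-paper count of the small cases.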