Solving a logic grid puzzle with integer programming

Modeling a logic puzzle as a constraint satisfaction problem with OMPR

Mikhail Popov https://mpopov.com
2019-05-19

Introduction

A logic grid puzzle involves a set of categories (for example: people, years, destinations), each with the same number of possible values, and a set of clues about how those values pair up. The goal is to deduce the unique one-to-one matching between every pair of categories.

Setting it up as a grid of matrices – one per pair of categories – helps with solving the puzzle. Once all the knowns are marked using the clues, we can proceed by elimination and deduction.

Figure 1: An example of a 3-variable grid for solving a logic puzzle.

Table 1: The solution table, which is filled out as combinations are figured out.

Variable 1 | Variable 2 | Variable 3
---------- | ---------- | ----------
A          | 1          |
B          |            |
C          |            |
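
To make the marking concrete, here is a minimal base R sketch of one sub-grid (my own illustration, reusing the placeholder labels from Table 1): each cell is 1 (confirmed), 0 (ruled out), or NA (unknown), and marking a known combination eliminates the rest of its row and column.

# One sub-grid pairing Variable 1 (rows A, B, C) with Variable 2 (columns 1-3);
# NA = unknown, 1 = confirmed, 0 = ruled out:
grid12 <- matrix(NA_integer_, nrow = 3, ncol = 3,
                 dimnames = list(c("A", "B", "C"), c("1", "2", "3")))
grid12["A", "1"] <- 1L          # a clue tells us A goes with 1...
grid12["A", c("2", "3")] <- 0L  # ...so A pairs with no other value,
grid12[c("B", "C"), "1"] <- 0L  # ...and no other row pairs with 1
grid12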

OMPR (Optimization Modelling Package) (Schumacher 2018a) is a package for R (R Core Team 2018) which provides a language for modeling and solving mixed integer linear programs. Once a model is specified, we can try solving it using the ompr.roi (Schumacher 2018b) interface to the R Optimization Infrastructure (Hornik et al. 2019) and a solver (e.g. GLPK via ROI.plugin.glpk (Theussl 2017)).

One may model a Sudoku puzzle as a constraint satisfaction problem using OMPR [1]. Similarly, I decided to try coding a logic grid puzzle as an integer program so it could be solved automatically. This post isn’t really about logic grid puzzles, and it’s not really about OMPR [2]; it’s about formalizing a problem into math and then expressing that math as code.

Coding examples


library(magrittr)        # for the %>% pipe
library(ompr)            # model specification
library(ompr.roi)        # bridge between ompr and ROI solvers
library(ROI.plugin.glpk) # the GLPK solver
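
Before specifying the puzzle model, here is a quick taste of OMPR's style using the libraries we just loaded – a toy model of my own, unrelated to the puzzles below (it uses the general MIPModel() interface; the puzzle code uses the MILPModel() backend): maximize \(x_1 + x_2\) over non-negative integers subject to \(x_1 + x_2 \leq 10\).

toy <- MIPModel() %>%
  add_variable(x[i], i = 1:2, type = "integer", lb = 0) %>% # two integer variables
  set_objective(sum_expr(x[i], i = 1:2), "max") %>%         # maximize their sum
  add_constraint(sum_expr(x[i], i = 1:2) <= 10) %>%         # subject to a cap
  solve_model(with_ROI(solver = "glpk"))

get_solution(toy, x[i]) # one row per variable; the values sum to 10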

We will specify our model by breaking up the overall logic grid into sub-grids and placing constraints on each sub-grid's \(n\) rows and \(n\) columns. For example, the first grid – containing variable 1 (as rows \(i = 1, \ldots, n\)) and variable 2 (as columns \(j = 1, \ldots, n\)) – is \(x^{12}\) and its constraints are specified as follows:

\[\begin{aligned} x^{12}_{i,j} \in \{0, 1\} & ~\mathrm{for~all}~i,j = 1, \ldots, n\\ \sum_{i = 1}^n x^{12}_{i,j} = 1 & ~\mathrm{for~all}~j = 1, \ldots, n\\ \sum_{j = 1}^n x^{12}_{i,j} = 1 & ~\mathrm{for~all}~i = 1, \ldots, n \end{aligned}\]

which constrains each row and each column to contain exactly one 1.

3-variable model specification

In the first example we will solve the very easy puzzle “Basic 3” from Brainzilla. In it, 4 people (Amanda, Jack, Mike, and Rachel) traveled to 4 destinations (London, Rio de Janeiro, Sydney, and Tokyo) between 2013 and 2016. Our goal is to figure out who traveled where and when, based on some clues to get us started:

  1. Neither Amanda nor Jack traveled in 2015.
  2. Mike didn’t travel to South America.
  3. Rachel traveled in 2014.
  4. Amanda visited London.
  5. Neither Mike nor Rachel traveled to Japan.
  6. A man traveled in 2016.

and the rest we have to deduce. We start by building a mixed integer linear program (a MILPModel()) with the grids as variables, a set of constraints on those variables, and no objective function to optimize:


n <- 4

model3 <- MILPModel() %>%
  # 3 grids:
  add_variable(x12[i, j], i = 1:n, j = 1:n, type = "binary") %>%
  add_variable(x13[i, j], i = 1:n, j = 1:n, type = "binary") %>%
  add_variable(x32[i, j], i = 1:n, j = 1:n, type = "binary") %>%
  
  # No objective:
  set_objective(0) %>%
  
  # Exactly one cell assigned in each row:
  add_constraint(sum_expr(x12[i, j], j = 1:n) == 1, i = 1:n) %>%
  add_constraint(sum_expr(x13[i, j], j = 1:n) == 1, i = 1:n) %>%
  add_constraint(sum_expr(x32[i, j], j = 1:n) == 1, i = 1:n) %>%
  
  # Exactly one cell assigned in each column:
  add_constraint(sum_expr(x12[i, j], i = 1:n) == 1, j = 1:n) %>%
  add_constraint(sum_expr(x13[i, j], i = 1:n) == 1, j = 1:n) %>%
  add_constraint(sum_expr(x32[i, j], i = 1:n) == 1, j = 1:n)

Finally, we want to encode deductions like

\[ \mathrm{if}~x^{12}_{i,j} = 1~\mathrm{and}~x^{13}_{i,k} = 1,~\mathrm{then}~x^{32}_{k,j} = 1 \]

One way to do this would be to set a constraint:

\[ x^{12}_{i,j} \times x^{13}_{i,k} = x^{32}_{k,j} \]

because for \(x^{32}_{k,j}\) to be 1, both \(x^{12}_{i,j}\) and \(x^{13}_{i,k}\) would have to be 1. If either is 0 (false), then it follows that \(x^{32}_{k,j}\) must also be 0.

I’ll be honest, dear reader, I have no idea how to code this [3]. On a whim, I tried the following and it worked:


model3 <- model3 %>%
  add_constraint(x12[i, j] + x13[i, k] <= x32[k, j] + 1, i = 1:n, j = 1:n, k = 1:n)

I reasoned that if x12[i, j] and x13[i, k] are both 1, the left-hand side is 2, which forces x32[k, j] to be 1 – exactly the deduction we want. If either is 0, the constraint is satisfied no matter what x32[k, j] is, so it only enforces one direction of the implication; combined with the row and column constraints, though, that turns out to be enough.
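
Incidentally, the same trick works in the other two directions – e.g. if person \(i\) traveled in year \(j\) and the year-\(j\) trip went to destination \(k\), then person \(i\) must have visited destination \(k\). A sketch of these optional tightening constraints (model3_tight is my own name; everything below runs without them):

model3_tight <- model3 %>%
  # year and destination match => person and destination match:
  add_constraint(x12[i, j] + x32[k, j] <= x13[i, k] + 1, i = 1:n, j = 1:n, k = 1:n) %>%
  # person and destination match => person and year match:
  add_constraint(x13[i, k] + x32[k, j] <= x12[i, j] + 1, i = 1:n, j = 1:n, k = 1:n)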

Now we are ready to translate the clues into additional constraints:


model3_fixed <- model3 %>%
  # 1. Neither Amanda (1) nor Jack (2) traveled in 2015 (3):
  add_constraint(x12[i, 3] == 0, i = 1:2) %>%
  # 2. Mike (3) didn't travel to South America (Rio de Janeiro: 2):
  add_constraint(x13[3, 2] == 0) %>%
  # 3. Rachel (4) traveled in 2014 (2):
  add_constraint(x12[4, 2] == 1) %>%
  # 4. Amanda (1) visited London (1):
  add_constraint(x13[1, 1] == 1) %>%
  # 5. Neither Mike (3) nor Rachel (4) traveled to Japan (Tokyo: 4):
  add_constraint(x13[i, 4] == 0, i = c(3, 4)) %>%
  # 6. A man (2 or 3 => neither 1 nor 4) traveled in 2016 (4):
  add_constraint(x12[i, 4] == 0, i = c(1, 4))

result3 <- solve_model(model3_fixed, with_ROI(solver = "glpk", verbose = TRUE))

<SOLVER MSG>  ----
GLPK Simplex Optimizer, v4.65
97 rows, 48 columns, 297 non-zeros
      0: obj =  -0.000000000e+00 inf =   2.600e+01 (26)
     39: obj =  -0.000000000e+00 inf =   0.000e+00 (0)
OPTIMAL LP SOLUTION FOUND
GLPK Integer Optimizer, v4.65
97 rows, 48 columns, 297 non-zeros
48 integer variables, all of which are binary
Integer optimization begins...
Long-step dual simplex will be used
+    39: mip =     not found yet <=              +inf        (1; 0)
+    39: >>>>>   0.000000000e+00 <=   0.000000000e+00   0.0% (1; 0)
+    39: mip =   0.000000000e+00 <=     tree is empty   0.0% (0; 1)
INTEGER OPTIMAL SOLUTION FOUND
<!SOLVER MSG> ----
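
Before reading off the grids, we can confirm that the solver reported success rather than infeasibility; solver_status() comes with ompr:

# Sanity check that a feasible assignment was found:
solver_status(result3)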

# For changing indices to labels:
Is <- c("Amanda", "Jack", "Mike", "Rachel")
Js <- c("2013", "2014", "2015", "2016")
Ks <- c("London", "Rio de Janeiro", "Sydney", "Tokyo")

# Extracting the solution to each grid:
solution3 <- list(
  x12 = get_solution(result3, x12[i, j]) %>%
    dplyr::filter(value == 1) %>%
    dplyr::select(-c(variable, value)) %>%
    dplyr::mutate(i = Is[i], j = Js[j]),
  x13 = get_solution(result3, x13[i, k]) %>%
    dplyr::filter(value == 1) %>%
    dplyr::select(-c(variable, value)) %>%
    dplyr::mutate(i = Is[i], k = Ks[k]),
  x32 = get_solution(result3, x32[k, j]) %>%
    dplyr::filter(value == 1) %>%
    dplyr::select(-c(variable, value)) %>%
    dplyr::mutate(k = Ks[k], j = Js[j])
)

# Obtained solution:
solution3 %>%
  purrr::reduce(dplyr::left_join) %>%
  dplyr::arrange(i) %>%
  knitr::kable(col.names = c("Person", "Year", "Destination"), align = "l") %>%
  kableExtra::kable_styling(bootstrap_options = c("striped"))

Person | Year | Destination
------ | ---- | --------------
Amanda | 2013 | London
Jack   | 2016 | Tokyo
Mike   | 2015 | Sydney
Rachel | 2014 | Rio de Janeiro

…which should match the solution we would obtain by elimination:

Person | Year | Destination
------ | ---- | --------------
Amanda | 2013 | London
Jack   | 2016 | Tokyo
Mike   | 2015 | Sydney
Rachel | 2014 | Rio de Janeiro
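
We can also let R make that comparison for us. A quick sketch (the object names obtained3 and expected3 are mine, not from the code above):

# Merge the solved grids and compare against the known answer:
obtained3 <- solution3 %>%
  purrr::reduce(dplyr::left_join) %>%
  dplyr::arrange(i)
expected3 <- data.frame(
  i = c("Amanda", "Jack", "Mike", "Rachel"),
  j = c("2013", "2016", "2015", "2014"),
  k = c("London", "Tokyo", "Sydney", "Rio de Janeiro"),
  stringsAsFactors = FALSE
)
isTRUE(all.equal(obtained3, expected3, check.attributes = FALSE))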

4-variable model specification with relative constraints

In this second example, we will solve the medium-difficulty puzzle “Agility Competition” from Brainzilla, in which four dogs took part in an agility competition. Our goal is to figure out each dog’s name, breed, the task it was best at, and its final ranking. We are given the following clues:

  1. Only the winning dog has the same initial letter in name and breed.
  2. The Boxer ranked 1 position after the Shepherd. Neither of them likes the tunnel or jumping through the tire.
  3. Cheetah and the dog who loves the poles were 1st and 3rd.
  4. Thor doesn’t like the plank and didn’t come 2nd.
  5. Cheetah either loves the tunnel or she came 4th.
  6. The dog who loves the plank came 1 position after the dog who loves the poles.
  7. Suzie is not a Shepherd, and Beany doesn’t like the tunnel.

This is the map we will use to refer to the grids:

Figure 2: Figure from https://www.brainzilla.com, modified to show grid mapping

First, we will add the extra grids and place the row-column constraints on them as we did before. Because of the way we have chosen to write our model, extending it to 4 variables (which yields 3 additional grids) means we can build on what we already have:


model4 <- model3 %>%
  # 3 additional grids:
  add_variable(x14[i, j], i = 1:n, j = 1:n, type = "binary") %>%
  add_variable(x42[i, j], i = 1:n, j = 1:n, type = "binary") %>%
  add_variable(x43[i, j], i = 1:n, j = 1:n, type = "binary") %>%
  
  # Exactly one cell assigned in each row, as before:
  add_constraint(sum_expr(x14[i, j], j = 1:n) == 1, i = 1:n) %>%
  add_constraint(sum_expr(x42[i, j], j = 1:n) == 1, i = 1:n) %>%
  add_constraint(sum_expr(x43[i, j], j = 1:n) == 1, i = 1:n) %>%
  
  # Exactly one cell assigned in each column, as before:
  add_constraint(sum_expr(x14[i, j], i = 1:n) == 1, j = 1:n) %>%
  add_constraint(sum_expr(x42[i, j], i = 1:n) == 1, j = 1:n) %>%
  add_constraint(sum_expr(x43[i, j], i = 1:n) == 1, j = 1:n) %>%

  # Additional logic:
  add_constraint(x12[i, j] + x14[i, k] <= x42[k, j] + 1, i = 1:n, j = 1:n, k = 1:n) %>%
  add_constraint(x13[i, j] + x14[i, k] <= x43[k, j] + 1, i = 1:n, j = 1:n, k = 1:n) %>%
  add_constraint(x42[i, j] + x43[i, k] <= x32[k, j] + 1, i = 1:n, j = 1:n, k = 1:n)

With the model specified, we can now translate the clues into constraints. You might recall that one of the clues is “The Boxer (1) ranked 1 position after the Shepherd (3)”, which we can’t specify in a fixed way. The key here will be to utilize the \(x\) variables as indicator variables to fetch [4] the indices from \(x^{42}\) (the grid with positions as rows and breeds as columns):

\[\begin{aligned} \mathrm{Boxer~position} & = \sum_{i = 1}^n i \times x^{42}_{i,1}\\ \mathrm{Shepherd~position} & = \sum_{i = 1}^n i \times x^{42}_{i,3}\\ \mathrm{Boxer~position} & = \mathrm{Shepherd~position} + 1 \end{aligned}\]
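
As a toy illustration (values made up, not taken from the solver): if the Boxer column of \(x^{42}\) were \((0, 1, 0, 0)\), i.e. the Boxer placed 2nd, the weighted sum picks out that 2:

x42_boxer <- c(0, 1, 0, 0)            # hypothetical indicator column
sum(seq_along(x42_boxer) * x42_boxer) # 1*0 + 2*1 + 3*0 + 4*0 = 2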


model4_fixed <- model4 %>%
  # 2. The Boxer (1) ranked 1 position after the Shepherd (3):
  add_constraint(x42[1, 1] == 0) %>% # Boxer cannot be 1st
  add_constraint(x42[4, 3] == 0) %>% # Shepherd cannot be 4th
  add_constraint(sum_expr(i * x42[i, 1], i = 1:4) == sum_expr(i * x42[i, 3], i = 1:4) + 1) %>%
  #    Neither of them likes the tunnel (4) nor the tire (3):
  add_constraint(x32[i, 1] == 0, i = 3:4) %>% # the Boxer's dislikes
  add_constraint(x32[i, 3] == 0, i = 3:4) %>% # the Shepherd's dislikes
  
  # 3. Cheetah (2) and the dog who loves the poles (2) were 1st and 3rd:
  add_constraint(x13[2, 2] == 0) %>% # Cheetah cannot like poles
  # Cheetah must be the 1st of the two: if she were 3rd, clues 2 and 6 would
  # make the poles-loving Shepherd the winner, and clue 1 would then name the
  # winner Suzie, contradicting clue 7:
  add_constraint(x14[2, 1] == 1) %>% # Cheetah is in 1st position
  add_constraint(x43[3, 2] == 1) %>% # Dog who loves poles is in 3rd
  
  # 1. Only the winning dog has the same initial letter in name and breed:
  add_constraint(x12[2, 2] == 1) %>% # Cheetah won, therefore she is a Collie
  add_constraint(x12[1, 1] == 0) %>% # and Beany cannot be a Boxer
  add_constraint(x12[3, 4] == 0) %>% # and Thor cannot be a Terrier
  add_constraint(x12[4, 3] == 0) %>% # and Suzie cannot be a Shepherd
  
  # 4. Thor (3) doesn't like the plank (1) and didn't come 2nd:
  add_constraint(x13[3, 1] == 0) %>%
  add_constraint(x14[3, 2] == 0) %>%
  
  # 5. Cheetah (2) either loves the tunnel (4) or she came 4th:
  add_constraint(x13[2, 4] == 1) %>% # Since Cheetah came 1st, she must love the tunnel
  
  # 6. The dog who loves the plank (1) came 1 position after the dog who loves the poles (2):
  add_constraint(x43[1, 1] == 0) %>% # Plank cannot be 1st (x43 is the position-by-task grid)
  add_constraint(x43[4, 2] == 0) %>% # Poles cannot be 4th
  add_constraint(sum_expr(i * x43[i, 1], i = 1:4) == sum_expr(i * x43[i, 2], i = 1:4) + 1) %>%
  
  # 7. Suzie (4) is not a Shepherd (3) and Beany (1) doesn't like the tunnel (4):
  add_constraint(x12[4, 3] == 0) # already added under clue 1, but harmless
  # No tunnel constraint is needed because the tunnel is already claimed by Cheetah

result4 <- solve_model(model4_fixed, with_ROI(solver = "glpk", verbose = TRUE))

<SOLVER MSG>  ----
GLPK Simplex Optimizer, v4.65
325 rows, 96 columns, 995 non-zeros
      0: obj =  -0.000000000e+00 inf =   5.400e+01 (54)
     78: obj =  -0.000000000e+00 inf =   9.437e-15 (0)
OPTIMAL LP SOLUTION FOUND
GLPK Integer Optimizer, v4.65
325 rows, 96 columns, 995 non-zeros
96 integer variables, all of which are binary
Integer optimization begins...
Long-step dual simplex will be used
+    78: mip =     not found yet <=              +inf        (1; 0)
+    83: >>>>>   0.000000000e+00 <=   0.000000000e+00   0.0% (1; 1)
+    83: mip =   0.000000000e+00 <=     tree is empty   0.0% (0; 3)
INTEGER OPTIMAL SOLUTION FOUND
<!SOLVER MSG> ----

Is <- c("Beany", "Cheetah", "Thor", "Suzie") # name
Js <- c("Boxer", "Collie", "Shepherd", "Terrier") # breed
Ks <- c("Plank", "Poles", "Tire", "Tunnel") # best at
Ls <- as.character(1:4) # ranking

# Extract solved grids:
solution4 <- list(
  x12 = get_solution(result4, x12[i, j]) %>%
    dplyr::filter(value == 1) %>%
    dplyr::select(-c(variable, value)) %>%
    dplyr::mutate(i = Is[i], j = Js[j]),
  x13 = get_solution(result4, x13[i, k]) %>%
    dplyr::filter(value == 1) %>%
    dplyr::select(-c(variable, value)) %>%
    dplyr::mutate(i = Is[i], k = Ks[k]),
  x14 = get_solution(result4, x14[i, l]) %>%
    dplyr::filter(value == 1) %>%
    dplyr::select(-c(variable, value)) %>%
    dplyr::mutate(i = Is[i], l = Ls[l]),
  x42 = get_solution(result4, x42[l, j]) %>%
    dplyr::filter(value == 1) %>%
    dplyr::select(-c(variable, value)) %>%
    dplyr::mutate(l = Ls[l], j = Js[j]),
  x43 = get_solution(result4, x43[l, k]) %>%
    dplyr::filter(value == 1) %>%
    dplyr::select(-c(variable, value)) %>%
    dplyr::mutate(l = Ls[l], k = Ks[k]),
  x32 = get_solution(result4, x32[k, j]) %>%
    dplyr::filter(value == 1) %>%
    dplyr::select(-c(variable, value)) %>%
    dplyr::mutate(k = Ks[k], j = Js[j])
)

# Obtained solution:
solution4 %>%
  purrr::reduce(dplyr::left_join) %>%
  dplyr::arrange(i) %>%
  knitr::kable(col.names = c("Dog", "Breed", "Best", "Position"), align = "l") %>%
  kableExtra::kable_styling(bootstrap_options = c("striped"))

Dog     | Breed    | Best   | Position
------- | -------- | ------ | --------
Beany   | Terrier  | Tire   | 2
Cheetah | Collie   | Tunnel | 1
Suzie   | Boxer    | Plank  | 4
Thor    | Shepherd | Poles  | 3

…which should match the solution we would obtain by deduction:

Dog     | Breed    | Best   | Position
------- | -------- | ------ | --------
Beany   | Terrier  | Tire   | 2
Cheetah | Collie   | Tunnel | 1
Suzie   | Boxer    | Plank  | 4
Thor    | Shepherd | Poles  | 3

Further reading and other software

OMPR was inspired by the JuMP package, part of the JuliaOpt family of optimization packages for Julia. If you are interested in mathematical modeling and optimization in Python, check out Pyomo and its documentation.

This post by Jeff Schecter is an insightful introduction to constrained optimization (using Pyomo), applied to matching clients with stylists at Stitch Fix. By the way, I highly recommend subscribing to the MultiThreaded blog for really interesting and informative articles about the technology and data science used at Stitch Fix.

P.S. If you liked this post, you may be interested in my tutorial on Bayesian optimization in R.

Acknowledgements

You might have noticed a few blue links with “W”s on this page. Those are links to the Wikipedia articles on those topics, and if you hover over them, you will see a preview of the article. This is possible with the ContextCards library developed by my coworker Joaquin over at Wikimedia, based on the Popups extension for MediaWiki.

Hornik, Kurt, David Meyer, Florian Schwendinger, and Stefan Theussl. 2019. ROI: R Optimization Infrastructure. https://CRAN.R-project.org/package=ROI.

R Core Team. 2018. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.

Schumacher, Dirk. 2018a. Ompr: Model and Solve Mixed Integer Linear Programs. https://CRAN.R-project.org/package=ompr.

———. 2018b. Ompr.roi: A Solver for ’Ompr’ That Uses the R Optimization Infrastructure (’Roi’). https://CRAN.R-project.org/package=ompr.roi.

Theussl, Stefan. 2017. ROI.plugin.glpk: ’ROI’ Plug-in ’Glpk’. https://CRAN.R-project.org/package=ROI.plugin.glpk.


  [1] See this article

  [2] For that I would recommend these articles in its documentation.

  [3] add_constraint(model3, x12[i, j] * x13[i, k] == x32[k, j], i = 1:n, j = 1:n, k = 1:n), while intuitive, results in the following error: Error in x12[i, j] * x13[i, k] : non-numeric argument to binary operator

  [4] pun intended :D

Corrections

If you see mistakes or want to suggest changes, please create an issue on the source repository.

Reuse

Text and figures are licensed under Creative Commons Attribution CC BY-ND 4.0. Source code is available at https://github.com/bearloga/logic-puzzle-ompr, unless otherwise noted. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".