In this problem, we have a grid of oranges. Some cells are empty, some have a fresh orange, and some have a rotten orange. Each turn, any fresh orange adjacent to a rotten orange (above, below, left, or right) goes rotten. It feels like a simplified Conway's Game of Life, where neighbors affect the values of cells. The problem asks how many iterations we need for all the fresh oranges to go rotten; if there are fresh oranges that can never go rotten, we return -1.
Initial approach
My initial brute force approach looped through every cell on every iteration, checking every adjacent value of every cell and rebuilding the grid with the new values. This worked, but it landed me in the 7th percentile for runtime.
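For reference, the brute force looked roughly like this (reconstructed from memory, so the details are approximate, but the shape is the same: rescan the whole grid every minute):

```typescript
// Hypothetical reconstruction of the brute-force approach: every minute,
// rebuild the entire grid from the previous one, rotting any fresh orange
// that touches a rotten neighbor. Roughly O(minutes * rows * cols) overall.
function orangesRottingBruteForce(grid: number[][]): number {
    let current = grid.map(row => [...row]);
    let minutes = 0;
    while (true) {
        const next = current.map(row => [...row]);
        let changed = false;
        for (let i = 0; i < current.length; i++) {
            for (let j = 0; j < current[0].length; j++) {
                if (current[i][j] !== 1) continue; // only fresh oranges can change
                const rottenNeighbor =
                    (i > 0 && current[i - 1][j] === 2) ||
                    (i < current.length - 1 && current[i + 1][j] === 2) ||
                    (j > 0 && current[i][j - 1] === 2) ||
                    (j < current[0].length - 1 && current[i][j + 1] === 2);
                if (rottenNeighbor) {
                    next[i][j] = 2;
                    changed = true;
                }
            }
        }
        if (!changed) {
            // nothing rotted this minute: either we're done, or some can never rot
            const anyFresh = current.some(row => row.includes(1));
            return anyFresh ? -1 : minutes;
        }
        current = next;
        minutes++;
    }
}
```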
Optimized approach
There are a few key details I initially missed that allow us to dramatically cut down on the work.
Each iteration, only the newly rotten oranges can rot more oranges; the oranges that rotted earlier have already affected their fresh neighbors. So there's no reason to re-check previously rotten oranges or empty cells on every iteration.
With that, this problem can be solved with a breadth first search. First, I scan the whole grid once to collect the rotten oranges and count the fresh ones. Then, starting from the initial rotten oranges, each round I look only at the oranges that rotted in the previous round and check whether they rot any fresh neighbors. Each time I find a fresh orange to make rotten, I subtract 1 from the fresh orange count; when it hits zero, I return the number of iterations. I also track whether each iteration rotted at least one new orange, since if it didn't and fresh oranges remain, they can never rot, so I can end the process and return -1.
function orangesRotting(grid: number[][]): number {
    let iterations = 0;
    let rottenOranges: [number, number][] = [];
    let freshOrangeCount = 0;
    // initial scan: collect the starting rotten oranges and count the fresh ones
    grid.forEach((row, i) => {
        row.forEach((cell, j) => {
            if (cell === 2) {
                rottenOranges.push([i, j]);
            }
            if (cell === 1) {
                freshOrangeCount = freshOrangeCount + 1;
            }
        });
    });
    if (!freshOrangeCount) {
        return iterations;
    }
    while (true) {
        iterations = iterations + 1;
        const newRottenOranges: [number, number][] = [];
        let newRotten = false;
        const createNewRottenOrange = (y: number, x: number) => {
            grid[y][x] = 2;
            newRottenOranges.push([y, x]);
            freshOrangeCount = freshOrangeCount - 1;
            newRotten = true;
        };
        // only the oranges that rotted last round can rot new neighbors
        for (let i = 0; i < rottenOranges.length; i++) {
            const [y, x] = rottenOranges[i];
            if (y > 0 && grid[y - 1][x] === 1) {
                createNewRottenOrange(y - 1, x);
            }
            if (y < grid.length - 1 && grid[y + 1][x] === 1) {
                createNewRottenOrange(y + 1, x);
            }
            if (x > 0 && grid[y][x - 1] === 1) {
                createNewRottenOrange(y, x - 1);
            }
            if (x < grid[0].length - 1 && grid[y][x + 1] === 1) {
                createNewRottenOrange(y, x + 1);
            }
        }
        if (!newRotten && freshOrangeCount > 0) {
            return -1;
        }
        if (freshOrangeCount === 0) {
            return iterations;
        }
        rottenOranges = newRottenOranges;
    }
}
This jumps me into the 85th percentile for runtime.
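As an aside, the same frontier-by-frontier idea is often written with a single queue processed level by level, rather than swapping in a fresh array each round. A sketch of that variant (my own rewrite for illustration, not the submitted solution; like the version above, it mutates its input grid):

```typescript
// Alternative formulation: one queue, where each "level" is the set of
// oranges that rotted in the same minute. A head index replaces
// Array.shift(), which would be O(n) per dequeue.
function orangesRottingQueue(grid: number[][]): number {
    const rows = grid.length;
    const cols = grid[0].length;
    const queue: [number, number][] = [];
    let fresh = 0;
    for (let i = 0; i < rows; i++) {
        for (let j = 0; j < cols; j++) {
            if (grid[i][j] === 2) queue.push([i, j]);
            else if (grid[i][j] === 1) fresh++;
        }
    }
    if (fresh === 0) return 0;
    const directions = [[-1, 0], [1, 0], [0, -1], [0, 1]];
    let minutes = 0;
    let head = 0;
    while (head < queue.length && fresh > 0) {
        const levelEnd = queue.length; // everything queued so far rotted in earlier minutes
        minutes++;
        while (head < levelEnd) {
            const [y, x] = queue[head++];
            for (const [dy, dx] of directions) {
                const ny = y + dy;
                const nx = x + dx;
                if (ny >= 0 && ny < rows && nx >= 0 && nx < cols && grid[ny][nx] === 1) {
                    grid[ny][nx] = 2;
                    fresh--;
                    queue.push([ny, nx]);
                }
            }
        }
    }
    return fresh === 0 ? minutes : -1;
}
```

It does the same work as the two-array version; the trade-off is mostly stylistic, though the single queue avoids allocating a new array per round.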
Parting thoughts
It's cool to see the abstract patterns I've learned - breadth first search in this case - show up in concrete ways. This is also another example of a situation where I should have stopped and evaluated what was true before jumping into the brute force solution; I might have identified the optimization earlier.