Latest Event Updates

Project Euler: a list of interesting problems



If you are not aware, a website called Project Euler hosts hundreds of algorithmic problems. Although most of them are math-related, it is a good resource to warm up and train your brain in coding. You can use any programming language you want and track your progress.

Here’s a list of Euler’s problems that I find interesting in terms of diversity, with the aim to improve not only math but also programming skills (No – Problem Title):

11 – Largest product in a grid
12 – Highly divisible triangular number
15 – Lattice paths
24 – Lexicographic permutations
54 – Poker hands
59 – XOR decryption
62 – Cubic permutations
67 – Maximum path sum II
68 – Magic 5-gon ring
78 – Coin partitions
79 – Passcode derivation
81 – Path sum: two ways
86 – Cuboid route
94 – Almost equilateral triangles
96 – Sudoku
100 – Arranged probability
107 – Minimal network
109 – Darts
114 – Counting block combinations I
115 – Counting block combinations II
116 – Red, green or blue tiles
117 – Red, green, and blue tiles
145 – How many reversible numbers are there below one-billion?
148 – Exploring Pascal’s triangle
150 – Searching a triangular array for a sub-triangle having minimum-sum
154 – Exploring Pascal’s pyramid
165 – Intersections
166 – Criss Cross
181 – Investigating in how many ways objects of two different colours can be grouped
182 – RSA encryption
186 – Connectedness of a network
194 – Coloured Configurations
208 – Robot Walks
209 – Circular Logic
232 – The Race
267 – Billionaire
275 – Balanced Sculptures
280 – Ant and seeds

Note that it is highly recommended to solve Euler’s problems one by one, because solving a previous problem often provides a clue to the next ones.

Command And Query Responsibility Segregation and Event Sourcing: What you should think about in advance


After quite a while this article has finally come out! I started to write it about 6 months ago, but because of many factors in my life I procrastinated. This text is mainly devoted to those who want to see the major pros and cons of this model. You won’t find the nuts and bolts here, however; for those, move to the reference section, where really good URLs are listed for a start. Further on, the pattern will be called CQRS/ES for simplicity.

2 main goals of the pattern

Some developers consider this pattern to be used solely for scalability reasons, forgetting that it also greatly reduces data model complexity.

Events and snapshots – no removal but always append

It is well known that append-only architectures distribute more easily than updating architectures because there are far fewer locks to deal with. Append-only operations require no locks on the database side, which increases both performance and scalability.

CQRS + Event Model

One of the largest issues when using Event Sourcing is that you cannot ask the system a query such as “Give me all users whose first names are ‘Greg’”. This is due to not having a representation of current state.

With Event Sourcing the event model is also the persistence model on the Write side. This drastically lowers costs of development as no conversion between the models is needed.
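To make this concrete, here is a minimal sketch (class and method names are my own, not from any framework) of a write side where the appended events are the only persistence model and current state is rebuilt by replaying them:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal event-sourced aggregate: the append-only event list IS the persistence model.
public class EventSourcedAccount {

    // Events are simple, immutable facts about what happened.
    public static final class MoneyDeposited {
        final long amount;
        public MoneyDeposited(long amount) { this.amount = amount; }
    }

    private final List<MoneyDeposited> changes = new ArrayList<MoneyDeposited>();
    private long balance;

    // Commands validate, then append an event; state only changes by applying events.
    public void deposit(long amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        apply(new MoneyDeposited(amount));
    }

    private void apply(MoneyDeposited event) {
        balance += event.amount;
        changes.add(event);
    }

    // Rebuilding state is just replaying the stored events in order.
    public static EventSourcedAccount replay(List<MoneyDeposited> history) {
        EventSourcedAccount account = new EventSourcedAccount();
        for (MoneyDeposited event : history) {
            account.apply(event);
        }
        return account;
    }

    public long balance() { return balance; }
    public List<MoneyDeposited> uncommittedChanges() { return changes; }
}
```

Since the events themselves are what get saved, no mapping layer between a domain model and a storage model is needed.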

Synchronizing write and read sides

The choice of integration model, though, is very important, as translation and synchronization between models can become a very expensive undertaking. The model that is best suited is the introduction of events: events are a well-known integration pattern and offer the best mechanism for model synchronization.

CQRS suggests using 2 different databases: one optimized for reading (denormalized) and another for writing. It is a very tempting way to design systems. The only big problem with this solution is that you somehow need to keep your 2 data stores in sync.

You must definitely have a mechanism to recreate a read-side from the write-side. You might need to do this if the read side data store got out of synchronization for some reason, or because you needed to modify the structure of the read-side data store to support a new query. Remember that the CQRS pattern does not mandate that you use different stores on the read side and write side. You could decide to use a single relational store with a schema in third normal form and a set of denormalized views over that schema. However, replaying events is a very convenient mechanism for resynchronizing the read-side data store with the write-side data store.
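As a sketch of such a replay mechanism (all names here are illustrative assumptions), a read-side projection can be dropped and rebuilt from the full event stream, answering exactly the kind of "all users named Greg" query mentioned earlier:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: a read-side projection that can be rebuilt at any time by replaying all events.
public class UserProjection {

    public static final class UserRegistered {
        final String id;
        final String firstName;
        public UserRegistered(String id, String firstName) {
            this.id = id;
            this.firstName = firstName;
        }
    }

    // Denormalized view optimized for the query "all user ids with a given first name".
    private final Map<String, List<String>> idsByFirstName = new HashMap<String, List<String>>();

    public void handle(UserRegistered event) {
        idsByFirstName
            .computeIfAbsent(event.firstName, k -> new ArrayList<String>())
            .add(event.id);
    }

    // Resynchronization: drop the view and replay the full event stream from scratch.
    public void rebuildFrom(List<UserRegistered> eventStream) {
        idsByFirstName.clear();
        for (UserRegistered event : eventStream) {
            handle(event);
        }
    }

    public List<String> usersNamed(String firstName) {
        return idsByFirstName.getOrDefault(firstName, new ArrayList<String>());
    }
}
```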

Storage technology replaceability

CQRS/ES makes it easy to change your technologies. For example, you could start with a file-based event store for prototyping and development, and later switch to a Windows Azure table-based store for production.

Relational vs NoSQL solutions and the cost of maintenance – think twice

Let’s mention major issues in case of using a relational database as a technology for this model.

Pitfall 1: Limit in number of columns per table

The read side is always kept in a denormalized form for performance reasons and usually requires a huge number of columns. If you have, e.g., an Oracle 11g database and hit its maximum of 1,000 columns per table, you won’t be able to add more columns.

Pitfall 2: Adding new columns

In many cases, in order to add new data to the domain model, you must release a bunch of applications/services along with changing the database schema on the read side, especially when a column is not empty but already holds some data. Here I prefer a NoSQL database such as MongoDB: with its relaxed schema (data are kept in JSON documents) you won’t have to change the database schema at all! A CQRS/ES system with a NoSQL database might have a simple Event Storage for the write side, with a blob column in which data are represented as history, and a schema-less NoSQL database as the read side.

Eventual consistency – think twice

While not required, it is common to use messaging between the write and read sides. This means that the system will be in an inconsistent state from time to time. This pattern is not a fit for, e.g., financial transactions where strong consistency is required between the 2 sides. In the majority of cases, however, strong consistency is needed only on the write side, whereas the read side is used for querying and reporting.

Simple vs costly

Events are simple objects that describe what has happened in the system. By simply saving events, you are avoiding the complications associated with saving complex domain objects to a relational store; namely, the object-relational impedance mismatch.

The event-based model may be slightly more costly due to the need to define events, but this cost is relatively low, and the model also offers a smaller impedance mismatch to bridge, which helps to make up for the cost. The event-based model also offers all of the benefits discussed in “Events” that further reduce the overall initial cost of creating the events. That said, the CQRS and Event Sourcing model is actually less expensive in most cases.


As long as you have a stream of events, you can project it to any form, even a conventional SQL database. For instance, my favorite approach is to project event streams into JSON documents stored in a cloud storage.

Also, a great benefit of the separate read layer is that it will not suffer from an impedance mismatch: it is connected directly to the data model, which can make queries much easier to optimize. On the write side you no longer need to worry about how your locks impact queries, and on the read side your database can be read-only.



Because events are immutable, you can use an append-only operation when you save them. Events are also simple, standalone objects. Both these factors can lead to better performance and scalability for the system than approaches that use complex relational storage models. On top of this, events reduce network volumes compared to the full snapshots of a pure CQRS model.


A normalized database schema can fail to provide adequate response times because of the excessive table JOIN operations. Despite advances in relational database technology, a JOIN operation is still very expensive compared to a single-table read.


Command: In most systems, especially web systems, the Command side generally processes a very small number of transactions as a percentage of the whole. Scalability therefore is not always important.

Query: In most systems, especially web systems, the Query side generally processes a very large number of transactions as a percentage of the whole (oftentimes 2 or more orders of magnitude). Scalability is most often needed for the query side.

Distributed development teams

The architecture can be viewed as three distinct decoupled areas. The first is the client; it consumes DTOs and produces Commands. The second is the domain; it consumes commands and produces events. The third is the Read Model; it consumes events and produces DTOs. The decoupled nature of these three areas can be extremely valuable in terms of team characteristics. Instead of working in one vertical slice, the team can work on three concurrent vertical slices: the client, the domain, and the read model. This allows much better scaling of the number of developers working on a project: since the areas are isolated from each other, developers cause fewer conflicts when making changes. It would be reasonable to nearly triple a team’s size without introducing a larger amount of conflict, since no extra communication among the developers is needed.


There is no one right way to implement CQRS. Of course, it is not recommended to use this model for critical projects if you have no experience with it. Note that it is not possible to create an optimal solution for searching, reporting, and processing transactions using a single model.


References:

Introduction to Domain Driven Design, CQRS and Event Sourcing

Command-Query-Responsibility-Segregation (CQRS) Makes Domain Models Practical

CQRS and Event Sourcing

CQRS – Did you mean CARS?

A CQRS and ES Deep Dive

Clarified CQRS




Distributed transactions and scalability issues in large-scale distributed systems


Distributed transactions are the main evil of scalability

It is very hard to scale distributed transactions to an extremely high level; moreover, they reduce throughput. Unlike a transaction on a local database, a distributed transaction involves altering data on multiple nodes. It can be a database + JMS broker or just a set of different databases. As an example, let’s recall the classical 2-phase commit (2PC later), a type of atomic commitment protocol, in a back-end service with a high volume of transactions. This protocol provides ACID-like properties for global transaction processing. I won’t go into the details of how it works under the hood; I’ll just tell you that C (Consistency) from ACID is the main evil of scalability in distributed systems. It puts a great burden on the system due to its complex coordination algorithm. Overall throughput can drop severalfold. Locks in all of the data sources are held for the duration of the 2PC, and longer-held locks create the risk of higher contention. The 2PC coordinator also represents a Single Point of Failure, which is unacceptable for critical systems. For systems that have a reasonably high volume of messages, or sensitive SLAs, it’s worth giving up strong consistency for throughput.

But how can I live without distributed transactions to achieve higher scalability?

Algorithms such as 2PC use an “Exactly Once” technique, whereas we will use an “At Least Once” technique. The difference is that the developer must handle duplicates in the application code. Most queueing technologies provide acknowledgements that a message has been accepted (handling it is a separate matter). Databases use local transactions. We can deal with downstream failures without coordination. Read on!

Idempotence and fault tolerance

From math, idempotence is as simple as this:

f(f(x)) = f(x)

That is, the result stays the same, no matter how many times the function is applied to the same argument. In the distributed world, idempotence implies that an operation can be invoked repeatedly without changing the result. Why do I need it? Because we should somehow handle duplicate requests in case of a system failure. Let’s make it clear by considering an example. A client application sends a financial transaction to a server (there might be a load-balanced cluster of them or just one) and waits for an acknowledgment. For some reason, at this particular time:

  • A server goes down or
  • Client goes down or
  • Network failure happens

In all of these 3 cases, the client app didn’t get an acknowledgment message (reply) from the server about the transaction status. Of course, the client should then retry this transaction. The server must ensure that this financial transaction is processed correctly even though it is delivered “At Least Once”. This is where idempotence comes to the rescue. The server must remember state: that a transaction with this id has already been processed, including the saved acknowledgement message, so that it can check whether the transaction exists and reply with its acknowledgement message if it does. We don’t have expensive distributed transactions anymore; “At Least Once” is a more relaxed and scalable approach. That is, instead of locking resources everywhere, we can assume that messages will arrive at least once.
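A minimal sketch of such an idempotent server (the class name and ack format are made up for illustration) might look like this:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an idempotent receiver: the server remembers processed transaction ids
// together with the acknowledgement it already sent, so client retries are safe.
public class IdempotentReceiver {

    private final Map<String, String> processed = new HashMap<String, String>();
    private long balance;

    // Handling the same transaction id twice returns the stored ack
    // instead of applying the transfer again.
    public String handleTransfer(String transactionId, long amount) {
        String previousAck = processed.get(transactionId);
        if (previousAck != null) {
            return previousAck;           // duplicate: replay the saved acknowledgement
        }
        balance += amount;                // real processing happens exactly once
        String ack = "OK:" + transactionId;
        processed.put(transactionId, ack);
        return ack;
    }

    public long balance() { return balance; }
}
```

In production the processed-id map would of course live in durable storage and be updated in the same local transaction as the balance.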

Optimistic locking

Even though this technique is quite old, it goes well with idempotence. If two people are trying to change the same entity at the same time, we don’t lock database records; rather, we use a concept of versioning and, optionally, uniqueness. The idea is to save a version with each entity record in the database and, before saving, to make sure it hasn’t been changed. A simple example is a self-service kiosk where people check in before boarding at the airport. They can select a vacant seat from the seat map.


Each seat has a version = 1. When multiple people make their choice in parallel, before proceeding the system simply checks whether the seat’s version has changed. If it has, the user is notified that the seat was already taken while she was thinking. This is a very simple example where the version can only be 1 or 2. A more difficult situation arises in order-management systems, where an order might have many versions, but that doesn’t change how optimistic locking works. Again, the idea yields a great trade-off in terms of speed, because we don’t use a locking mechanism.
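A sketch of this version check (a hypothetical Seat class; a real system would do the check in the database with a conditional UPDATE) could look like:

```java
// Sketch of optimistic locking for the seat-map example: an update succeeds only
// if the version the client originally read is still the current one.
public class Seat {

    private int version = 1;
    private String passenger;

    // Compare the version, then swap; in a relational database this would roughly be
    // UPDATE seat SET passenger = ?, version = version + 1 WHERE id = ? AND version = ?
    public synchronized boolean reserve(String name, int expectedVersion) {
        if (version != expectedVersion) {
            return false;                 // somebody else took the seat meanwhile
        }
        passenger = name;
        version++;
        return true;
    }

    public synchronized int version() { return version; }
    public synchronized String passenger() { return passenger; }
}
```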

Local atomic transactions and unique constraints

Local atomic transactions are usually restricted to a single store. They are primarily needed to apply a set of operations atomically to a single resource (e.g., a relational database) and to ensure correct ordering of operations within a transaction. In some cases we can do away with transactions entirely, particularly if we don’t care about the order of operations. In that case we can process operations asynchronously, again leading to better throughput. Sometimes a model that requires ordering can be redesigned for asynchronous operations.

Putting it all together

In order to achieve greater throughput, a system should follow these principles:

  1. You can retry an operation after a downstream failure, relying on idempotence.
  2. Don’t use distributed transactions; use optimistic locking where possible, as it’s much cheaper.
  3. Local transactions based on a single-phase commit for each resource are more scalable than distributed ones and increase overall application availability.
  4. Messages may be reordered.

Wrapping up

Such great systems as Google’s Bigtable avoid traditional ACID transactions because they impose a heavy overhead on a highly distributed data storage model. I was lucky to use all of the above techniques in my own applications too, including mission-critical financial transactions, and must say that a few years ago not many people knew about these techniques, but now I hear about them more and more often. Oh yeah, I almost forgot! I urge you to read the great article written by Pat Helland that covers even more use cases; I stumbled upon it during my research. And remember: you can live without distributed transactions if you implement idempotence and handle downstream failures correctly.


1. Your Coffee Shop Doesn’t Use Two-Phase Commit by Gregor Hohpe.

2. Idempotence Is Not a Medical Condition by Pat Helland.

3. 2PC or not 2PC, Wherefore Art Thou XA? by Dan Pritchett.

4. Life beyond Distributed Transactions: an Apostate’s Opinion by Pat Helland.

Service Discovery in distributed systems


After quite a while I’ve decided to continue writing my blog. There are always lots of ideas to share with you, but, as usual, there is a lack of time.


Service Discovery is an architectural pattern and a key component of most distributed systems and service oriented architectures.

In simple terms, it is a central gateway for all client applications that want to use other services. A client application doesn’t have to know where a particular service is located (usually IP:port); moreover, a service can be moved/redeployed to arbitrary boxes for any reason. There’s no need to change connection details, update configs or anything else; Service Discovery will take care of that. As a client app you just need to ask it for access to other services. The pattern yields a good benefit especially when there are hundreds of client applications and/or dozens of services.

Most of the work in this article is based on trial and error as well as some research. I also urge you to read the great survey of open-source Service Discovery frameworks on Jason Wilder’s blog.

In this particular section I won’t mention CAP properties, as they can vary depending on the implementation. But CAP is important; if you want to know more about it, I recommend the article devoted to the CAP theorem.

High level components

Service Discovery is the process of finding a suitable service, and its location, for a task requested by a service consumer. Let’s break Service Discovery down into 3 main components:

Service Requestor

Any consuming application that wants to use some service; that is, a client application, a consumer, a user, etc.

Service Provider

In the most basic scenario there is a Service Provider that finds the service and talks to the Service Requestor.

Service Registry

The Service Registry stores information about all services. It can be either static or dynamic. The stored details can be IP:port, access rights and so forth.

This model comes directly from SOA but can be adapted to your needs. The conversation among the 3 entities above might differ depending on the implementation. As a quick example of such segregation, refer to Web Service Discovery.

Discovery options

1. Static service registry

This is a very basic and easy option. Its big drawback: it is static. Every time a service moves to another box, the static registry must be updated manually. We won’t go further here, as this option is quite obvious and has little benefit for flexibility.

2. Announcement

This pattern implies that as soon as a service is up, it registers itself in the Service Registry; such a notification is sometimes called an Announcement. This simple approach greatly simplifies maintenance of the service registry. It’s also worth mentioning that in the distributed world we always care about the No Single Point Of Failure principle, where we have at least 2 redundant nodes for a service. The Service Registry is usually implemented as a separate, independent component to be self-sufficient. For simplicity, the registry can be a simple replicated Memcached, Ehcache or another storage, not necessarily a cache.
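A toy sketch of such a dynamic registry with announcement (the service names and the host:port format are my assumptions) might look like:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a dynamic Service Registry: services announce themselves on
// startup; requestors look up a location by service name.
public class ServiceRegistry {

    private final Map<String, String> locations = new ConcurrentHashMap<String, String>();

    // Announcement: a service registers its own location as soon as it is up.
    public void announce(String serviceName, String hostAndPort) {
        locations.put(serviceName, hostAndPort);
    }

    public String lookup(String serviceName) {
        return locations.get(serviceName);
    }

    public void deregister(String serviceName) {
        locations.remove(serviceName);
    }
}
```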

A few words about ZooKeeper as a solution for Service Discovery.

A counterexample is ZooKeeper, which doesn’t follow the Shared Nothing architectural pattern: the Service Registry is coupled with the Service Provider. In ZooKeeper’s world, coordination nodes in an ensemble are NOT independent of each other, because they share the Service Registry and there is write contention due to the ordering of messages. This leads to ZooKeeper’s poor write scalability, caused by its synchronous replication among nodes. In terms of requirements, however, most applications involved in configuration management, including Service Discovery, don’t require frequent writes but rather reads, so overall scalability is not blocked.

Let’s pick up where we left off: announcement.

The process of announcement is depicted below:


3. Multicasting

Using the IGMP protocol, a Service Provider subscribes to a particular multicast group, and all services send a multicast message to this group. The downside of this approach is that IGMP is optional for IPv4 and not all hardware supports it.

Health checking

Once a service is registered in the Service Registry, the Service Provider needs to know whether it is alive and healthy (that is, 100% of the service is available).

Option 1 – on reconnect: As soon as a Service Requestor understands that there’s no connection to the service it worked with, it goes back to the Service Provider to get a new one. Each time a Service Requestor accesses the Service Provider, the latter does health checking (roughly the same as heartbeats, plus probably some other work) to make sure the service is alive, maintaining the Service Registry and returning a response to the client with information about a healthy service.

Option 2 – heartbeating: The Service Provider receives heartbeats at equal time slices from registered services. Once a heartbeat is lost, the Service Provider removes the corresponding record from the Service Registry. This is a good option for getting more real-time information, though it is more difficult to implement. This is how ZooKeeper and other distributed systems work.
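A sketch of option 2 (a hypothetical HeartbeatMonitor; the timeout value is arbitrary) could look like this:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch of heartbeat-based health checking: the provider records the time of the
// last heartbeat per service and evicts entries whose heartbeat is overdue.
public class HeartbeatMonitor {

    private final long timeoutMillis;
    private final Map<String, Long> lastHeartbeat = new HashMap<String, Long>();

    public HeartbeatMonitor(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    public void onHeartbeat(String serviceName, long nowMillis) {
        lastHeartbeat.put(serviceName, nowMillis);
    }

    // Remove every service that has not sent a heartbeat within the timeout;
    // returns how many were evicted.
    public int evictDead(long nowMillis) {
        int evicted = 0;
        Iterator<Map.Entry<String, Long>> it = lastHeartbeat.entrySet().iterator();
        while (it.hasNext()) {
            if (nowMillis - it.next().getValue() > timeoutMillis) {
                it.remove();
                evicted++;
            }
        }
        return evicted;
    }

    public boolean isAlive(String serviceName) {
        return lastHeartbeat.containsKey(serviceName);
    }
}
```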


In both options, if some service goes down, clients go back to the Service Provider. Which strategy to use is up to the developer; option 1 is more complex but is the better one in most scenarios.

Load Balancing

Load balancing can easily be implemented inside the Service Provider, as it becomes a central gateway of access for all Service Requestors to other services. The simplest variation is round-robin. I won’t go into details, as it is a different topic and you can find more on the internet.
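For illustration, a round-robin variation inside the Service Provider could be sketched as follows (the class name is hypothetical):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of round-robin balancing inside the Service Provider: each lookup
// returns the next instance of the service in cyclic order.
public class RoundRobinBalancer {

    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    public String nextInstance() {
        // floorMod keeps the index valid even after the counter overflows.
        int index = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}
```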


As with load balancing, access control can be handled inside the Service Provider; you might want to apply tokens with expiration or cryptographic keys.

Cooperation of all components

At a very high level, the communication of the components can happen as in the following picture:



We have defined what Service Discovery is, broken it down into 3 components and described a few basic approaches.

HighLoad Conference 2013


Today I was lucky to have attended the annual developers conference HighLoad++ 2013, held for the 7th time. Some conference sessions are translated into English and available here:


Listeners heard many buzzwords like horizontal scalability, replication, sharding and NoSQL. Although I missed the first day of the conference, what I really liked was not just the new trends in technology but the speakers’ invaluable experience from real projects, with pros and cons, especially:

1. “MySql versus something else” by Mark Callaghan [Facebook] on storage efficiency, a framework for analysis and benchmarking, and database algorithms:


2. “Cassandra vs In-Memory Data Grid in eCommerce” by Alexander Soloviev [Grid Dynamics] was very informative, with deep analysis and benchmarking of different cases and their pros and cons.

3. “Query Optimizer in MariaDB: now w/o indices” by Sergey Golubchik [Monty Program Ab]

MinHash algorithm or how to quickly find similarities among 2 documents


MinHash is a technique from Locality Sensitive Hashing that allows finding similarities between 2 sets. This is a buzzword frequently met in the Data Mining and Data Science fields of CS. What is surprising is that this method was invented in 1997 and used in the AltaVista web-search engine back in the 90s to find similarities among web documents. It can also be used to:

  • Find duplicates
  • Near duplicate image detection
  • Near neighbor search

Basically, the algorithm can be applied to anything that can be represented as numbers.

Let’s start with a bit of math from probability theory and statistics.

Define the following formula for two sets A and B:

J(A,B) = |A ∩ B| / |A ∪ B|

This is the so-called Jaccard coefficient.

Where: J ∈ [0..1]

J = 0 – if A ∩ B = ∅, that is, the 2 sets are disjoint, meaning there are no similarities.

J = 1 – if A ∩ B = A = B, that is, the 2 sets are identical.

A and B are more similar when their Jaccard coefficient is closer to 1.
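As a quick illustration, the coefficient can be computed directly from two sets (a straightforward sketch, not optimized for large sets):

```java
import java.util.HashSet;
import java.util.Set;

// Direct computation of the Jaccard coefficient J(A, B) = |A ∩ B| / |A ∪ B|.
public class Jaccard {

    public static <T> double coefficient(Set<T> a, Set<T> b) {
        if (a.isEmpty() && b.isEmpty()) {
            return 1.0;                   // both empty: identical by convention
        }
        Set<T> intersection = new HashSet<T>(a);
        intersection.retainAll(b);
        Set<T> union = new HashSet<T>(a);
        union.addAll(b);
        return (double) intersection.size() / union.size();
    }
}
```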

This simple formula becomes cumbersome if the sets are quite large, e.g. 2 web documents of more than 1 MB in size. Ouch, that’s too much: 1 MB of text data is 1,048,576 characters, provided that 1 ASCII char = 8 bits (for a Unicode charset it is even more).

Now that we understand a bit of theory, let’s try to apply hashing to the Jaccard coefficient. Wherever I hear about hashing, it always leads to randomized algorithms.

OK, let’s move on. The main idea is that similar objects hash to the same bucket. This follows from the fact that the probability of collision is higher for similar objects.

Here we give an example for 2 sets A and B, but the algorithm can be applied to any number of sets.

1. Essentially, we need to construct a set of k independent hash functions <h1, h2, h3, …, hk> randomly, where k = O(1/ε²) for ε > 0, such that the expected error of the estimate is at most ε. For example, 400 hashes would be required to estimate J(A,B) with an expected error less than or equal to 0.05. So, k can be varied to increase or decrease the likelihood of false negatives.

2. Next, for each set (A and B) and for each hash function hi, we initialize the minimum hash value hmin(i) to infinity.

3. For each element s in both sets A and B we compute the element’s hash with every function, such that: if hi(s) < hmin(i), then hmin(i) = hi(s).

Eventually we have a signature of k minimum hash values for each of the sets A and B.

4. If the 2 sets A and B are similar, then the probability P(hmin_A(i) = hmin_B(i)) = |A ∩ B| / |A ∪ B| is high, and it is exactly the Jaccard coefficient!

5. We compute this statistic over all k hash functions to estimate how similar the 2 sets are. The general formula is: Similarity = (number of identical minimum hash values) / k

In the real world this requires considering different parameters, hash functions, etc. more thoroughly. However, to demonstrate the algorithm I wrote some simple Java code:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Random;
import java.util.Set;

public class LSHMinHash<T> {

    private final int[] hashes;
    private final int numOfHashes;
    private final int numOfSets;
    private final Set<T> setA;
    private final Set<T> setB;
    private final Map<T, boolean[]> matrix;
    private final int[][] minHashes;

    public LSHMinHash(double e, Set<T> setA, Set<T> setB) {
        this.numOfHashes = (int) (1 / (e * e));
        this.numOfSets = 2;
        this.setA = setA;
        this.setB = setB;
        this.matrix = buildSetMatrix(setA, setB);
        this.minHashes = initMinHashes(numOfSets, numOfHashes);
        this.hashes = computeHashes(numOfHashes);
    }

    // Characteristic matrix: for every element, a membership flag per set.
    private Map<T, boolean[]> buildSetMatrix(Set<T> setA, Set<T> setB) {
        Map<T, boolean[]> matrix = new HashMap<T, boolean[]>();
        for (T element : setA) {
            matrix.put(element, new boolean[] { true, false });
        }
        for (T element : setB) {
            if (matrix.containsKey(element)) {
                matrix.put(element, new boolean[] { true, true });
            } else {
                matrix.put(element, new boolean[] { false, true });
            }
        }
        return matrix;
    }

    private int[][] initMinHashes(int numOfSets, int numOfHashes) {
        int[][] minHashes = new int[numOfSets][numOfHashes];
        for (int i = 0; i < numOfSets; i++) {
            for (int j = 0; j < numOfHashes; j++) {
                minHashes[i][j] = Integer.MAX_VALUE;
            }
        }
        return minHashes;
    }

    // Random coefficients, one per hash function; a fixed seed keeps runs reproducible.
    private int[] computeHashes(int numOfHashes) {
        int[] hashes = new int[numOfHashes];
        Random r = new Random(31);
        for (int i = 0; i < numOfHashes; i++) {
            int a = r.nextInt();
            int b = r.nextInt();
            int c = r.nextInt();
            hashes[i] = a * (a * b * c >> 4) + a * a * b * c + c;
        }
        return hashes;
    }

    // For each element present in the set, evaluate every hash function and keep the minimum.
    private void computeMinHashForSet(Set<T> set, int setIndex) {
        for (T element : matrix.keySet()) {
            if (set.contains(element)) {
                for (int i = 0; i < numOfHashes; i++) {
                    // The i-th hash of an element mixes its hashCode with the i-th coefficient.
                    int hashValue = element.hashCode() ^ hashes[i];
                    if (hashValue < minHashes[setIndex][i]) {
                        minHashes[setIndex][i] = hashValue;
                    }
                }
            }
        }
    }

    // The fraction of hash functions whose minima coincide estimates the Jaccard coefficient.
    private double computeMinHash(int[][] minHashes, int numOfHashes) {
        int identicalMinHashes = 0;
        for (int i = 0; i < numOfHashes; i++) {
            if (minHashes[0][i] == minHashes[1][i]) {
                identicalMinHashes++;
            }
        }
        return (1.0 * identicalMinHashes) / numOfHashes;
    }

    public double findSimilarities() {
        computeMinHashForSet(setA, 0);
        computeMinHashForSet(setB, 1);
        return computeMinHash(minHashes, numOfHashes);
    }

    public static void main(String[] args) {
        Set<String> setA = new HashSet<String>();
        setA.add("IS ");

        Set<String> setB = new HashSet<String>();
        setB.add("IS ");

        double errorFactor = 0.001;

        LSHMinHash<String> minHash = new LSHMinHash<String>(errorFactor, setA, setB);
        System.out.println(minHash.findSimilarities());
    }
}

Google Code Jam. Qualification Round 2013. Problem D – Treasure. Solved.



This was the problem I wasn’t able to optimise within the given time constraint back in April. Later I killed half a day to complete it, thinking over different algorithms from graph theory such as the Eulerian path
and Chinese postman problems. I was surprised at the time that this problem was put into the qualification round, as it is really challenging. In order to solve it, the graph problem should be refined and modeled properly. When modeling a graph I used to always map only one type of entity to vertices. As a counterexample, a bipartite graph has vertices divided into two disjoint sets U and V such that every edge connects a vertex in U to one in V. The solution here has nothing to do with bipartite-graph attacking approaches, except that there are 2 types of vertices.

The solution

1. Check that the number of chests matches the number of corresponding key types. Simple arithmetic counting. Otherwise there is no solution, as we don’t have enough keys to open all chests.

2. All vertices in a graph G should be reachable from vertex v0.

Construct a directed graph G=(V,E), where K is a set of all keys and C is a set of all chests:

  • V = {K} U {C} U {0}.
  • {0} = v0 is a starting vertex corresponding to the root vertex. Outgoing edges from it are key(s) given us initially.
  • E = {Set of all directed edges connecting K and C}. (i,j) ∈ E if either {i=chest, j=key} or {i=key, j=chest}

In simple words, we have a directed graph where each vertex is either a chest or a key, and directed edges form the connections in between. We need to check that there is a directed path from vertex v0 to every other vertex ∈ V; otherwise we cannot open all chests and the solution is impossible. Note that we need to check directed paths from v0 to all key vertices. This is standard single-source reachability and can be done with a Depth-First-Search algorithm:

void dfs(Graph g, int v) {
    marked[v] = true;
    for (int w : g.adj(v)) {
        if (!marked[w]) dfs(g, w);
    }
}

Technically, if there are unmarked vertices left, the problem is unsolvable. Time complexity is O(E+V). This check cuts lots of branches, saving us time.

Now let’s demonstrate the above on a real example. Our treasure trove consists of four chests, and you began with exactly one key of type 1.

See the table with input data:

Chest Number  |  Key Type To Open Chest  |  Key Types Inside
1             |  1                       |  None
2             |  1                       |  1, 3
3             |  2                       |  None
4             |  3                       |  2

Let’s draw a graph where v0 is the starting point of traversal (i.e., the initial key of type = 1).


As depicted, there are 2 solvable configurations: <2,1,4,3> and <2,4,3,1>. All keys are reachable from v0. The proof of this condition can be done by induction and is left to the reader.

3. Find a “lexicographically smallest” solution. This last condition added real hardness to the problem. If there are multiple solutions, we find the “lexicographically smallest” way to open the boxes. Each vertex is always traversed from lowest to highest; this can be achieved by constructing the graph with adjacency lists that keep the numbers of keys and chests in increasing order, which we subsequently iterate, applying a slightly optimized Depth-First-Search. The minimal configuration starts from v0. If we have >= 1 key initially, we just have to traverse from all of them in increasing order. On each step we always update the number of keys in our possession. If we’ve run out of keys, we cut the branch and backtrack, starting again where we left off with the next configuration. E.g., opening chest No. 1 at the very first step leads to a deadlock (actually a trivial case, as it happens on step No. 1). Such cases are not on the critical path, and we only need to backtrack to the previous calling vertex.

Conclusion: there are not so many configurations, unlike a straightforward brute force. Many optimization problems like this one appear in algorithmic contests. Algorithms and data structures alone are not sufficient: to pass the given time constraint, there must be the right strategy to cut unsolvable branches. That’s it!