I wonder how ancient civilizations that built incredible structures knew that, once built, they would remain standing. I’m not talking about mysterious pyramids but about more recent landmarks left to us by the Romans, such as their aqueducts. How sure were they about the resilience of these works to the forces of nature, strong winds, and harsh winters? Well… the answer seems easy: with lots of theoretical work, design, and calculation.
While this is true, much was also achieved through experimentation, both on the structural side and in the application of materials. That knowledge was vital for establishing good practices on what to use in each situation and for predicting maintenance needs and behavior in extreme conditions. The same applies to designing fault-tolerant, reliable systems that process large volumes of data.
Typical approaches start with reasonable assumptions about data volume, ingestion rate, and document size, but other factors are hard to predict, such as processing time and indexing time. It really helps to know how components behave under pressure and to calculate the limits of the current design.
Many (self-proclaimed) architects like to treat the design process as cooking up a big blend of hype-driven technologies, and it’s very easy to get burnt. Key factors such as SLAs, peak times, and hardware limitations greatly affect which components you choose to put in the pan and how you mix them together.
The only way to know how certain software components behave is to exercise them with a volume of data similar to the one they will process once in production. That data can simply be fed from the production servers, if it exists and the infrastructure allows. More often than not, though, the data is not yet available in the target formats, and you need to experiment with generated data in the chosen formats.
Whenever I want to test and benchmark the systems I’m working on, I use a synthetic data generator called log-synth. Log-synth is one of those Swiss-army knives for data generation. It has plenty of generators based on parametric statistical methods. The good part is that it’s open source and can easily be extended with new generation algorithms and output formats.
The most common output formats are JSON and CSV.
Recently I added a template-based generator that extends the range of available output formats. It leverages the data-generation algorithms that ship with log-synth and feeds their output into a templated document using the FreeMarker templating language.
To generate a sample vCard, first create a template file; the command below refers to it as template.txt.
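As a minimal sketch, a FreeMarker vCard template could look like the following. The field names (first, last, address, zip, timestamp) are placeholders of my choosing and must match the sampler names declared in the schema file:

```text
BEGIN:VCARD
VERSION:3.0
N:${last};${first}
FN:${first} ${last}
ADR;TYPE=HOME:;;${address};;;${zip};
EMAIL:${first}@example.com
REV:${timestamp}
END:VCARD
```

FreeMarker substitutes each `${...}` expression with the value generated for that field, so one document is rendered per generated record.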
Then create a schema file; the command below calls it schema.txt.
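A log-synth schema is a JSON list of sampler definitions, each pairing a field name with one of log-synth's sampler classes. The abbreviated sketch below is an assumption on my part, chosen to line up with a vCard template; a real schema for richer cards would declare more fields:

```json
[
  {"name": "first", "class": "name", "type": "first"},
  {"name": "last", "class": "name", "type": "last"},
  {"name": "address", "class": "address"},
  {"name": "zip", "class": "zip"},
  {"name": "timestamp", "class": "date", "format": "yyyy-MM-dd"}
]
```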
To invoke log-synth, run:
java -cp .:./target/log-synth-0.1-SNAPSHOT-jar-with-dependencies.jar com.mapr.synth.Synth -count 5000 -schema schema.txt -template template.txt -format TEMPLATE -output output/
The generated documents will end up in the output/ folder, as expected, and will look like this:
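As an illustration (all values invented), a rendered vCard produced from name and address samplers might look like:

```text
BEGIN:VCARD
VERSION:3.0
N:Smith;Mary
FN:Mary Smith
ADR;TYPE=HOME:;;1234 Oak St;;;94025;
EMAIL:mary@example.com
REV:2014-03-17
END:VCARD
```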
I invite you to explore this very handy tool on GitHub.