Jul 31, 2013

Snikt! Yet another OR-Mapper

Our new product is selling like crazy


I love Microsoft, especially when it comes to Visual Studio; I would even do J2EE if it had that kind of tooling. Years back, in the glory days of one of the companies I worked for, we decided to migrate one of its products, a super-duper banking app, to the Windows platform – to .NET, of course. There were a few day-end routines we wrote that processed transactions to accrue interest, no more than 15 of them. We noticed a significant performance leak: a run took almost half a day to finish, which was absolutely unacceptable for the few thousand accounts the bank had.

It wasn’t a complicated solution, yet memory and CPU were hitting nearly 100% on the app server. There were stored commands used to perform queries and .NET reflection used to late-bind routines, but the leaks had nothing to do with those parts of the design.

The frameworks that made our lives rapid!

Yes, we had made a few mistakes here and there, and then there were the Typed Datasets used as the OR-mapper in the data-tier, which sent our rapid development through the roof – that was a +1. When we dug deeper into the core of the code, the debugger showed that considerable time was spent just instantiating the datasets. That would have been acceptable, but doing it hundreds of thousands of times was a big cost for us. There wasn’t a handful of ORMs to choose from at that time; ADO.NET Typed Datasets were the thing. We were not in a position to drop all the Typed Datasets for the old-school way of mapping query results to business objects, so the pragmatic approach was to use the core business object – the Account model – with old-school mapping only in the day-end processes, as a proof of concept.

A few nights with cheesy pizza and beer later, my colleague Buddhi Pathirane and I managed to run the day-end routines again with old-school mappings. SHAZAM! The processes did not take more than a couple of hours to complete. In general, most of the code we wrote had to deal with stored commands and mapping query results to domain models rather than with business logic. It was the fastest we could get in terms of execution, but sadly new bugs were introduced, and the sheer amount of mapping code we had to write made our grown-up developers cry.
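
For context, the old-school mapping looks roughly like the sketch below – the stored command name and the Account properties are made up for illustration, not lifted from the actual banking app:

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    public class Account
    {
        // Hypothetical properties, just enough to show the mapping.
        public string AccountNumber { get; set; }
        public decimal Balance { get; set; }
    }

    public static class AccountRepository
    {
        public static IList<Account> GetAccountsForDayEnd(string connectionString)
        {
            var accounts = new List<Account>();

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("usp_GetAccountsForDayEnd", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                connection.Open();

                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Every column is mapped to the domain model by hand: fast at
                        // runtime, but tedious to write and repeat for every query.
                        accounts.Add(new Account
                        {
                            AccountNumber = reader.GetString(reader.GetOrdinal("AccountNumber")),
                            Balance = reader.GetDecimal(reader.GetOrdinal("Balance"))
                        });
                    }
                }
            }

            return accounts;
        }
    }

That hand-rolled plumbing is exactly the code that kept multiplying across the data-tier.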

That omnishambolic moment!

One of those days, another critical feature was to migrate all the Swedish vehicle details to a SQL Server database. The operations were mostly upserts across 7–10 tables, and the process took nearly 9 hours for 4 million records. This time it was on LINQ-to-SQL, and there was not much we could do to hint the DataContext into optimizing its queries. So we decided to write our own T-SQL commands – something we are all quite familiar with.

Choosing Entity Framework over LINQ-to-SQL had many benefits. The business components need not know what data model I use, so we used Entity Framework’s promising Code First T4 template to split the Context from the domain model. It was the unicorn when we decided to use it in almost all the projects in that company. Fellow developers loved it, but the major issue is that it is 2–3 times slower, or even worse in some cases. So, for many scenarios I had to write my own queries, and so did everyone else.
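
Roughly, that split looks like the sketch below with Code First – the Vehicle entity and its properties are illustrative, not the actual data model:

    using System.Data.Entity;

    // The domain model stays a plain POCO; business components reference only
    // this class and carry no dependency on Entity Framework.
    public class Vehicle
    {
        public int Id { get; set; }
        public string RegistrationNumber { get; set; }
    }

    // The context lives in the data-tier and is the only place that knows how
    // the POCOs map to the database.
    public class VehicleContext : DbContext
    {
        public DbSet<Vehicle> Vehicles { get; set; }
    }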

This out-performed the ORM in most cases, but sometimes the SQL generated by the query parser was similar to the ordinary SQL we would write ourselves. That gave us another clue: the performance leaks were not in the queries the ORMs designed. It was the mapping logic of both frameworks that was simply not fast enough.

How many times do I have to fall?

We were querying and mapping results to the business objects, so we fell back on in-line SQL queries and stored commands every time there was a significant performance leak – which, by then, was quite normal for us. Typed Datasets, LINQ-to-SQL, and Entity Framework each have their positives, but they have been hyper-promoted by their owners and followers well beyond that, as has every other full-blown ORM for that matter.

All I need is an ORM that suits my data-tier: one that can map SQL query results to business objects – not their associations and so on. It is quite straightforward, so I tried writing my own. Here’s how the first version looked. It would be a crazy idea to replace the entire ORM, so my intention was to fix some of the leaks side-by-side with this mapper wherever applicable. I also thought of pushing an open-source version with a few more enhancements. Here’s the GitHub repo to fork.
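
The idea boils down to a small reflection-based mapper along the lines of the sketch below – an illustration of the approach, not the actual first version from the repo:

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Reflection;

    public static class Mapper
    {
        // Maps every row of the reader onto a new T by matching column names to
        // writable property names. Associations are deliberately ignored.
        public static IList<T> Map<T>(IDataReader reader) where T : new()
        {
            var properties = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);
            var results = new List<T>();

            while (reader.Read())
            {
                var item = new T();
                foreach (var property in properties)
                {
                    if (!property.CanWrite)
                    {
                        continue;
                    }

                    int ordinal;
                    try
                    {
                        ordinal = reader.GetOrdinal(property.Name);
                    }
                    catch (IndexOutOfRangeException)
                    {
                        continue; // no matching column for this property
                    }

                    if (!reader.IsDBNull(ordinal))
                    {
                        property.SetValue(item, reader.GetValue(ordinal), null);
                    }
                }

                results.Add(item);
            }

            return results;
        }
    }

With something like this in place, a stored command plus Mapper.Map<Account>(reader) stands in for the hand-written per-query mapping code.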

More resources:

Lazy Developer Series   .NET   ORM   Data Access   Snikt   ADO.NET   EF   LINQ   GitHub   




Apr 14, 2013

Cleanup unit test code for tombstoning this summer?

Is our new product safe to start selling?


During the warmest winter, in 2007, a leading Swedish company providing Work Force Management solutions was transforming its major products onto the .NET platform when I departed to Stockholm to work closely with a couple of architects among the pioneers in the development team. Their well-footed infrastructure and toolset were quite interesting at that time: UX, Scrum, DDD, TDD, and everything from Continuous Inspection to Continuous Delivery were part of the everyday terminology, and an 80% unit test coverage threshold for the system under test was quite a showoff.

How much test code is enough to keep everyone happy? No quantitative answer can satisfy everyone, but enough tests to break the system keeps the adrenaline pumping for developers to write more test code than production code. Doing Test-driven development and writing unit tests have different connotations in the Agile Diaspora of the software industry, and it is a pity when test code is not treated as equally important as the system under test. In my article Lazy developer’s test strategies reduce bug count, here’s what I said about maintaining unit tests:

You have to change your tests when the production code changes. Your tests evolve when you add more modules to production code. Eventually, maintaining the test suite becomes the biggest complaint (…)

Tombstoning unit tests, especially given the agile nature of the code, can often leave no hint whatsoever of what the test suite intends to verify. A smarter, or rather lazier, way to maintain a test suite is to express it in a domain-specific test language. Inevitably, this brings ubiquitous meaning to the specifications that scream out of the test suite. The key practices for maintaining the readability of the test suite in this approach are:

  • Structure of the suite,
  • Conventions to adhere to, and
  • Patterns to practice.

Snikt! is open-source code, found on GitHub, which demonstrates these attributes.


The structure of a test suite is the first thing that screams out at a glance. It is also the first thing that speaks the same domain as its system under test, and the first thing that answers many questions when dealing with specifications: which spec belongs to which interactor, what behaviors have been verified, and so on and so forth. Typically, this is quite similar to the structure of the production code – at least that’s how it feels at a peek.

Conventions, on the other hand, add clarity and readability when organizing these specifications. There are a few conventions to note here, pulled together in the sketch further below:

  • The convention used to logically group specifications that belong to a single interactor, for instance the DatabaseSpecs namespace.
  • The convention used to name unit test specifications, for instance the WhenExecuteQuery class.
  • The convention used when writing assertions, for instance the ThenStrongTypedListIsReturned assertion.

It sounds boring – no drama at all – to write a unit test. Almost every specification initializes or mocks a few dependencies, executes a method or two, and checks the outcome; it’s the same for any given use case. But it takes time to distill the specifications for the behaviors that are worth writing. A smart approach is to use the Arrange-Act-Assert pattern when designing these specifications. A much cleaner approach, however, is to use the Build-Operate-Check pattern, plus a naming convention for logically grouping assertions – what is called Single Concept per Test.
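
Put together, a spec that follows these conventions and the Build-Operate-Check split reads roughly like the sketch below – the NUnit attributes and the in-memory stand-in are assumptions to keep it self-contained, not the actual Snikt specs:

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    namespace DatabaseSpecs
    {
        // A trivial stand-in for the interactor under test, only here so the
        // sketch compiles and runs on its own.
        public class InMemoryDatabase
        {
            private readonly List<string> rows = new List<string> { "alpha", "beta" };

            public IList<string> ExecuteQuery()
            {
                return rows.ToList();
            }
        }

        [TestFixture]
        public class WhenExecuteQuery
        {
            private IList<string> result;

            [SetUp]
            public void Setup()
            {
                // Build: arrange the interactor and its dependencies.
                var database = new InMemoryDatabase();

                // Operate: exercise the behavior under test once.
                result = database.ExecuteQuery();
            }

            // Check: one concept per test, named so the suite reads as a specification.
            [Test]
            public void ThenStrongTypedListIsReturned()
            {
                Assert.IsInstanceOf<IList<string>>(result);
            }

            [Test]
            public void ThenAllRowsAreReturned()
            {
                Assert.AreEqual(2, result.Count);
            }
        }
    }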

More resources:

Clean Code   Lazy Developer Series   Testing   BDD   TDD   Unit Testing   Test Suite   




Jan 28, 2013

Scream! Work-in-progress is a waste, do One feature at a time

The best thing about the future is that it comes one day at a time. – Abraham Lincoln

Hypothetically, if a team is left with only two thousand keystrokes – it’s just a random figure, enough to convince my mental model – the toughest challenge, despite the fancy features in the Product Backlog and the build metrics, is to deliver value for the money. For a novice team this would look like nothing but chaos, but it isn’t when those keystrokes are the limited window of opportunity to dive into the blue ocean.

Use agile programming methods

Doing Product Engineering the Scrum way, the Agile Diaspora of the software industry – especially offshore and outsourced teams – keeps forgetting the very basics of delivering Working Software as a Team. In simple steps, Scrum starts by breaking a product idea down into smaller features, which are then prioritized and shipped iteratively by a team that shares the same goal. Experts on everything from Continuous Inspection to pretty much Continuous Everything have been talking about this now and then. A smart decision, even in the real world, is to push many stories to the right corner of the Scrum Board. A much smarter decision, however, is to get the top stories done first, obviously – so what, then, goes wrong with delivering features?

An experienced team can have its own interpretation of how features should be delivered within the sprint, as it wishes. Let’s accept it: passionate developers are compelled to pick the stories that fancy them most. It’s their ego that will not let go of any technically challenging task, and the same ego that keeps them over-committed on work schedules, regardless of how good or bad their time management skills are.

You worked on tasks that aren’t important

Mature teams may wish to be self-organized and self-governed. The catch is, when the team is struggling, it’s their cynical habit to volunteer for troubleshooting, to step away from their own urgent and important stories, and to multi-task. As Stephen R. Covey nicely explains in his book First Things First, it’s not unusual to see people move onto not-important activities that appear urgent to them. Thus the Work-In-Progress cannot indicate the actuals of the sprint, since too many people are involved with too many feature stories simultaneously. The PO, management, stakeholders, and the team too have to wait patiently till the end of the sprint to know the remaining work, since the burn-down metrics do not reflect the real numbers.

XP, or rather the interesting Feature Crews – what do these have to offer Scrum teams? A pairing-based working model worth looking at, if adopted with care. There is nothing unusual in splitting the team into smaller ministries – perhaps not too many and not too few, but just enough to keep the features moving forward without being blocked half-way through. It is said that two heads are always better than one. The change in Scrum teams begins with:

  • Pairing with an experienced developer when the code is not familiar.
  • Work-in-progress stories indicating only the stories the team is actively working on.
  • Working at a pace that is currently optimal for the team.
  • The number of stories slipping to the next sprint being minimal, or rather declining.

For the benefit of stakeholders, clients, and those who think the time has come to dive into the blue ocean, after sprinting a few iterations it is just a matter of terminating the current sprint and compiling the features that have reached the right corner of the Scrum Board. Beyond the features that can be shipped, the number of discarded ones can be minimized by keeping the focus on limited work-in-progress.

More useful resources:

Scrum Teams   Scrum   Productivity   Lazy Developer Series   XP   Feature Crews   Lean   


