Thursday, 9 December 2010

Readable Tests: Separating intent from implementation

Very recently, I was working on a test class like this:

import static org.mockito.Mockito.when;

import java.util.ArrayList;
import java.util.Calendar;
import java.util.Collection;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

import junit.framework.TestCase;

public class AnalyticsExpirationDateManagerTest extends TestCase {

    private static final long ONE_HOUR_TIMEOUT = 1000 * 60 * 60;
    private static final long TWO_HOUR_TIMEOUT = ONE_HOUR_TIMEOUT * 2;

    private Map<Parameter, Long> analyticsToTimeout;
    private long defaultTimeout;

    private Parameter minTimeoutParam;
    @Mock private CacheKeyImpl<Parameter> cacheKey;

    @Override
    protected void setUp() throws Exception {
        MockitoAnnotations.initMocks(this);

        this.minTimeoutParam = new Parameter("minTimeout", "type");

        when(cacheKey.getFirstKey()).thenReturn(minTimeoutParam);

        this.analyticsToTimeout = new HashMap<Parameter, Long>();
        this.defaultTimeout = 0;
    }

    public void
    testGetExpirationDateWhenAnalyticsToTimeoutsAndCacheKeyAreEmpty() {
        AnalyticsExpirationDateManager<Long> manager =
                new AnalyticsExpirationDateManager<Long>(analyticsToTimeout, defaultTimeout);
        Date date = manager.getExpirationDate(cacheKey, 0L);
        assertNotNull(date);
    }

    public void
    testGetExpirationDateWithMinimumTimeoutOfOneHour() {
        this.analyticsToTimeout.put(this.minTimeoutParam, ONE_HOUR_TIMEOUT);
        Collection<Parameter> cacheKeysWithMinTimeoutParam = new ArrayList<Parameter>();
        cacheKeysWithMinTimeoutParam.add(this.minTimeoutParam);
        when(this.cacheKey.getKeys()).thenReturn(cacheKeysWithMinTimeoutParam);

        AnalyticsExpirationDateManager<Long> manager =
                new AnalyticsExpirationDateManager<Long>(analyticsToTimeout, defaultTimeout);
        Date date = manager.getExpirationDate(cacheKey, 0L);

        assertNotNull(date);
        Calendar expirationDate = Calendar.getInstance();
        expirationDate.setTime(date);

        Calendar currentDate = Calendar.getInstance();

        // Check if expiration date is one hour ahead of the current date.
        int expirationDateHour = expirationDate.get(Calendar.HOUR_OF_DAY);
        int currentDateHour = currentDate.get(Calendar.HOUR_OF_DAY);
        assertTrue(expirationDateHour - currentDateHour == 1);
    }

    public void
    testGetExpirationDateWhenCacheKeyIsNullAndDefaultTimeoutIsOneHour() {
        CacheKeyImpl<Parameter> NULL_CACHEKEY = null;
        AnalyticsExpirationDateManager<Long> manager =
                new AnalyticsExpirationDateManager<Long>(analyticsToTimeout, ONE_HOUR_TIMEOUT);
        Date date = manager.getExpirationDate(NULL_CACHEKEY, 0L);

        assertNotNull(date);
        Calendar expirationDate = Calendar.getInstance();
        expirationDate.setTime(date);

        Calendar currentDate = Calendar.getInstance();

        // Check if expiration date hour is the same as the current date hour.
        // When cache key is null, system date and time is returned and the default timeout is not used.
        int expirationDateHour = expirationDate.get(Calendar.HOUR_OF_DAY);
        int currentDateHour = currentDate.get(Calendar.HOUR_OF_DAY);
        assertTrue(expirationDateHour - currentDateHour == 0);
    }

    public void
    testGetExpirationDateWithDefaultTimeout() {
        // The default timeout is used when no timeout is specified.
        Collection<Parameter> cacheKeysWithoutTimeoutParam = new ArrayList<Parameter>();
        cacheKeysWithoutTimeoutParam.add(new Parameter("name", "type"));
        when(this.cacheKey.getKeys()).thenReturn(cacheKeysWithoutTimeoutParam);

        AnalyticsExpirationDateManager<Long> manager =
                new AnalyticsExpirationDateManager<Long>(analyticsToTimeout, ONE_HOUR_TIMEOUT);
        Date date = manager.getExpirationDate(cacheKey, 0L);

        assertNotNull(date);
        Calendar expirationDate = Calendar.getInstance();
        expirationDate.setTime(date);

        Calendar currentDate = Calendar.getInstance();

        // Check if expiration date is one hour ahead of the current date.
        int expirationDateHour = expirationDate.get(Calendar.HOUR_OF_DAY);
        int currentDateHour = currentDate.get(Calendar.HOUR_OF_DAY);
        assertTrue(expirationDateHour - currentDateHour == 1);
    }

    public void
    testGetExpirationDateWhenMinTimeoutIsSetAfterCreation() {
        AnalyticsExpirationDateManager<Long> manager =
                new AnalyticsExpirationDateManager<Long>(analyticsToTimeout, ONE_HOUR_TIMEOUT);
        manager.setExpirationTimeout(this.minTimeoutParam.getName(), TWO_HOUR_TIMEOUT);

        Date date = manager.getExpirationDate(cacheKey, 0L);

        assertNotNull(date);
        Calendar expirationDate = Calendar.getInstance();
        expirationDate.setTime(date);

        Calendar currentDate = Calendar.getInstance();

        // Check if expiration date is two hours ahead of the current date.
        int expirationDateHour = expirationDate.get(Calendar.HOUR_OF_DAY);
        int currentDateHour = currentDate.get(Calendar.HOUR_OF_DAY);
        assertTrue("Error", expirationDateHour - currentDateHour == 2);
    }

}

Quite frightening, isn't it? Very difficult to understand what's going on there.

The class above achieves 100% coverage of the class under test, and all the tests are valid in terms of what is being tested.

Problems

There are quite a few problems here:
- The intent (what) and implementation (how) are mixed, making the tests very hard to read;
- There is quite a lot of duplication among the test methods;
- There is also a bug in the way the test methods compare dates when trying to figure out how many hours one date is ahead of the other. When these tests run in the middle of the day, they work fine. When they run between 22:00 and 00:00, they break, because the hour calculation does not take the day into account.

Making the tests more readable

Besides testing the software, tests should also be seen as documentation, where business rules are clearly specified. Since the tests here are quite messy, understanding the intention and detecting bugs can be quite difficult.

I've done quite a few refactorings to this code in order to make it more readable, always working in small steps and constantly re-running the tests after each change. I'll try to summarise my steps for clarity and brevity.

1. Fixing the hour calculation bug

One of the first things that I had to do was to fix the hour calculation bug. In order to fix the bug across all test methods, I decided to extract the hour calculation into a separate class, removing all the duplication from the test methods. Using small steps, I took the opportunity to construct this new class called DateComparator (yes, I know I suck naming classes) using some internal Domain Specific Language (DSL) techniques.

import java.util.Date;

public class DateComparator {

    private Date origin;
    private Date target;
    private long milliseconds;
    private long unitsOfTime;

    private DateComparator(Date origin) {
        this.origin = origin;
    }

    public static DateComparator dateOf(Date origin) {
        return new DateComparator(origin);
    }

    public DateComparator is(long unitsOfTime) {
        this.unitsOfTime = unitsOfTime;
        return this;
    }

    public DateComparator hoursAhead() {
        this.milliseconds = unitsOfTime * 60 * 60 * 1000;
        return this;
    }

    public boolean from(Date date) {
        this.target = date;
        return this.checkDifference();
    }

    private boolean checkDifference() {
        return (origin.getTime() - target.getTime() >= this.milliseconds);
    }
}

So now I can use it to replace the date-comparison logic in the test methods.
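For example, the Calendar arithmetic at the end of testGetExpirationDateWithMinimumTimeoutOfOneHour collapses into a single readable line. A sketch of the replacement, using the dateOf() factory shown above:

import static com.mycompany.DateComparator.*;

// Compares the full timestamps rather than the hour-of-day fields,
// so the midnight rollover bug disappears.
assertTrue(dateOf(date).is(1).hoursAhead().from(new Date()));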

2. Extracting details into a super class

This step may seem a bit controversial at first, but it can be an interesting approach for separating the what from the how. The idea is to move the test setup, field declarations, initialisation logic, and everything else related to the test implementation (how) to a super class, leaving the test class with just the test methods (what).

Although this may not be a good OO application of the IS-A rule, I think it is a good compromise in order to achieve better readability in the test class.

NOTE: Logic can be moved to a super class, external classes (helpers, builders, etc) or both.

Here is the super class code:

import static org.mockito.Mockito.when;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Date;

import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

import junit.framework.TestCase;

public abstract class BaseTestForAnalyticsExpirationDateManager extends TestCase {

    protected Parameter minTimeoutParam;
    @Mock protected CacheKeyImpl<Parameter> cacheKey;
    protected Date systemDate;
    protected CacheKeyImpl<Parameter> NULL_CACHEKEY = null;
    protected AnalyticsExpirationDateManager<Long> manager;

    @Override
    protected void setUp() throws Exception {
        MockitoAnnotations.initMocks(this);
        this.minTimeoutParam = new Parameter("minTimeout", "type");
        when(cacheKey.getFirstKey()).thenReturn(minTimeoutParam);
        this.systemDate = new Date();
    }

    protected void assertThat(boolean condition) {
        assertTrue(condition);
    }

    protected void addMinimumTimeoutToCache() {
        this.configureCacheResponse(this.minTimeoutParam);
    }

    protected void doNotIncludeMinimumTimeoutInCache() {
        this.configureCacheResponse(new Parameter("name", "type"));
    }

    private void configureCacheResponse(Parameter parameter) {
        Collection<Parameter> cacheKeys = new ArrayList<Parameter>();
        cacheKeys.add(parameter);
        when(this.cacheKey.getKeys()).thenReturn(cacheKeys);
    }
}

3. Move creation and configuration of the object under test to a builder class

The construction and configuration of the AnalyticsExpirationDateManager is quite verbose and adds a lot of noise to the test. Once again I'll be using a builder class in order to make the code more readable and segregate responsibilities. Here is the builder class:

import java.util.HashMap;
import java.util.Map;

public class AnalyticsExpirationDateManagerBuilder {

    protected static final long ONE_HOUR = 1000 * 60 * 60;

    protected Parameter minTimeoutParam;
    protected long defaultTimeout = 0;

    private AnalyticsExpirationDateManager<Long> manager;
    private Map<Parameter, Long> analyticsToTimeouts = new HashMap<Parameter, Long>();
    private Long expirationTimeout;
    private Long minimumTimeout;

    private AnalyticsExpirationDateManagerBuilder() {
        this.minTimeoutParam = new Parameter("minTimeout", "type");
    }

    public static AnalyticsExpirationDateManagerBuilder aExpirationDateManager() {
        return new AnalyticsExpirationDateManagerBuilder();
    }

    public static long hours(int quantity) {
        return quantity * ONE_HOUR;
    }

    public AnalyticsExpirationDateManagerBuilder withDefaultTimeout(long milliseconds) {
        this.defaultTimeout = milliseconds;
        return this;
    }

    public AnalyticsExpirationDateManagerBuilder withExpirationTimeout(long milliseconds) {
        this.expirationTimeout = Long.valueOf(milliseconds);
        return this;
    }

    public AnalyticsExpirationDateManagerBuilder withMinimumTimeout(long milliseconds) {
        this.minimumTimeout = Long.valueOf(milliseconds);
        return this;
    }

    public AnalyticsExpirationDateManager<Long> build() {
        if (this.minimumTimeout != null) {
            analyticsToTimeouts.put(minTimeoutParam, minimumTimeout);
        }
        this.manager = new AnalyticsExpirationDateManager<Long>(analyticsToTimeouts, defaultTimeout);
        if (this.expirationTimeout != null) {
            this.manager.setExpirationTimeout(minTimeoutParam.getName(), expirationTimeout);
        }
        return this.manager;
    }

}

The final version of the test class

After many small steps, this is what the test class looks like. I took the opportunity to rename the test methods as well.

import static com.mycompany.AnalyticsExpirationDateManagerBuilder.*;
import static com.mycompany.DateComparator.*;

import java.util.Date;

public class AnalyticsExpirationDateManagerTest extends BaseTestForAnalyticsExpirationDateManager {

    public void
    testExpirationTimeWithJustDefaultValues() {
        manager = aExpirationDateManager().build();
        Date cacheExpiration = manager.getExpirationDate(cacheKey, 0L);
        assertThat(dateOf(cacheExpiration).is(0).hoursAhead().from(systemDate));
    }

    public void
    testExpirationTimeWithMinimumTimeoutOfOneHour() {
        addMinimumTimeoutToCache();
        manager = aExpirationDateManager()
                      .withMinimumTimeout(hours(1))
                      .build();
        Date cacheExpiration = manager.getExpirationDate(cacheKey, 0L);
        assertThat(dateOf(cacheExpiration).is(1).hoursAhead().from(systemDate));
    }

    public void
    testExpirationTimeWhenCacheKeyIsNullAndDefaultTimeoutIsOneHour() {
        manager = aExpirationDateManager()
                      .withDefaultTimeout(hours(1))
                      .build();
        Date cacheExpiration = manager.getExpirationDate(NULL_CACHEKEY, 0L);
        // When cache key is null, system date and time is returned and the default timeout is not used.
        assertThat(dateOf(cacheExpiration).is(0).hoursAhead().from(systemDate));
    }

    public void
    testExpirationTimeWithDefaultTimeout() {
        doNotIncludeMinimumTimeoutInCache();
        manager = aExpirationDateManager()
                      .withDefaultTimeout(hours(1))
                      .build();
        Date cacheExpiration = manager.getExpirationDate(cacheKey, 0L);
        assertThat(dateOf(cacheExpiration).is(1).hoursAhead().from(systemDate));
    }

    public void
    testExpirationTimeWhenExpirationTimeoutIsSet() {
        manager = aExpirationDateManager()
                      .withDefaultTimeout(hours(1))
                      .withExpirationTimeout(hours(2))
                      .build();
        Date cacheExpiration = manager.getExpirationDate(cacheKey, 0L);
        // The expiration timeout has precedence over the default timeout.
        assertThat(dateOf(cacheExpiration).is(2).hoursAhead().from(systemDate));
    }

}


Conclusion

Test classes should be easy to read. They should express intention, system behaviour, business rules. Test classes should express how the system works. They are executable requirements and specifications and should be a great source of information for any developer joining the project.

In order to achieve that, we need to try to keep our test methods divided into just three simple parts.

1. Context: The state of the object being tested. Here is where we set all the attributes and mock dependencies. Using variations of the Builder pattern can greatly enhance readability.
manager = aExpirationDateManager()
              .withDefaultTimeout(hours(1))
              .withExpirationTimeout(hours(2))
              .build();

2. Operation: The operation being tested. Here is where the operation is invoked.
Date cacheExpiration = manager.getExpirationDate(cacheKey, 0L);

3. Assertion: Here is where you specify the behaviour expected. The more readable this part is, the better. Using DSL-style code is probably the best way to express the intent of the test.
assertThat(dateOf(cacheExpiration).is(2).hoursAhead().from(systemDate));

In this post I went backwards: I started from a messy test class and refactored it into a more readable implementation. As many people are doing TDD nowadays, I wanted to show how we can improve an existing test. For new tests, I would suggest that you start writing them following the Context >> Operation >> Assertion approach. Try writing the test code in plain English first. Once the test intent is clear, start replacing the plain English text with Java internal DSL code, keeping the implementation out of the test class.
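For instance, a brand new test could start its life as plain English comments and then have each line replaced as the DSL emerges. A hypothetical example using the builder and comparator from this post:

public void
testExpirationTimeWithDefaultTimeoutOfTwoHours() {
    // Context: a manager with a default timeout of two hours
    manager = aExpirationDateManager()
                  .withDefaultTimeout(hours(2))
                  .build();
    // Operation: ask for the cache expiration date
    Date cacheExpiration = manager.getExpirationDate(cacheKey, 0L);
    // Assertion: the expiration date is two hours ahead of the system date
    assertThat(dateOf(cacheExpiration).is(2).hoursAhead().from(systemDate));
}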

PS: The ideas for this blog post came from a few discussions I had during the Software Craftsmanship Round-table meetings promoted by the London Software Craftsmanship Community (LSCC).

Tuesday, 7 December 2010

A basic ActiveRecord implementation in Java

Recently I was looking at different ways to develop applications in Java and thought it would be interesting to try using the ActiveRecord pattern in my persistence layer instead of implementing the traditional approach with a DAO. My idea is not to create my own ActiveRecord framework but to use existing ones as support for this approach.

The scope: Write functional tests that could prove that the methods save and delete on an entity work. I'll use an entity called Traveller for this example.

The technology: I chose to use the following frameworks: Spring 3.0.5, JPA 2.0, Hibernate 3.5.3, AspectJ 1.6.9, JUnit 4.8.2, Maven 2.2.1, Eclipse Helios and MySQL 5.x

I'll be omitting things that are not too important. For all the details, please have a look at the whole source code at:

https://github.com/sandromancuso/cqrs-activerecord

Let's start with the test class:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
    locations = {
        "file:src/test/resources/applicationContext-test.xml",
        "file:src/main/resources/applicationContext-services.xml"
    })
@TransactionConfiguration(transactionManager = "myTransactionManager", defaultRollback = true)
@Transactional
public class TravellerActiveRecordIntegrationTest extends BaseTravellerIntegration {

    @Test public void
    testTravellerSelfCreation() {
        assertThereAreNoTravellers(named("John"), from("England"));

        Traveller traveller = aTraveller().named("John").from("England").build();
        traveller.save();

        assertThereIsASingleTraveller(named("John"), from("England"));
    }

    @Test public void
    testTravellerEdition() {
        Traveller traveller = aTraveller().named("John").from("England").build();
        traveller.save();

        traveller.setName("Sandro");
        traveller.setCountry("Brazil");
        traveller.save();

        assertThereAreNoTravellers(named("John"), from("England"));
        assertThereIsASingleTraveller(named("Sandro"), from("Brazil"));
    }

    @Test public void
    testDeleteTraveller() {
        Traveller traveller = aTraveller().named("John").from("England").build();
        traveller.save();

        traveller.delete();
        assertThereAreNoTravellers(named("John"), from("England"));
    }

}

A few things to notice about this test class:
- As this test is meant to insert into and delete from the database, I set it to always roll back the transaction after each test, meaning that nothing is committed to the database permanently. This is important in order to execute the tests multiple times and get the same results.
- The test class extends a base class that provides the methods assertThereAreNoTravellers and assertThereIsASingleTraveller. The advantage of doing that is that you separate what you want to test from how you want to test it, keeping the test class clean and focused on the intention.
- Note that I also used the Builder pattern to build the traveller instance instead of calling the setter methods, which is a nice way to make the tests more readable. A minimal sketch of such a builder follows.
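The builder itself is not shown in this post; here is a rough sketch of what it could look like (hypothetical names, the real implementation is in the repository linked above):

public class TravellerBuilder {

    private String name;
    private String country;

    private TravellerBuilder() {}

    public static TravellerBuilder aTraveller() {
        return new TravellerBuilder();
    }

    public TravellerBuilder named(String name) {
        this.name = name;
        return this;
    }

    public TravellerBuilder from(String country) {
        this.country = country;
        return this;
    }

    public Traveller build() {
        // Uses the setters that Lombok generates for the entity (shown below).
        Traveller traveller = new Traveller();
        traveller.setName(name);
        traveller.setCountry(country);
        return traveller;
    }
}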

The Traveller entity implementation

So now, let's have a look at the Traveller entity implementation.

@Entity
@Table(name = "traveller")
@EqualsAndHashCode(callSuper = false)
@Configurable
public @Data class Traveller extends BaseEntity {

    @Id @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;
    private String name;
    private String country;

}

The Traveller class is a normal JPA entity, as you can see from the @Entity, @Table, @Id and @GeneratedValue annotations. To reduce boilerplate code like getters, setters, toString() and hashCode(), I'm using Lombok, a small library that generates all of that for us. Just add the @Data annotation.
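For anyone who hasn't seen Lombok: @Data roughly saves you from writing this by hand (a sketch of the generated shape, not Lombok's exact output):

public class Traveller extends BaseEntity {

    private long id;
    private String name;
    private String country;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getCountry() { return country; }
    public void setCountry(String country) { this.country = country; }

    // ...plus equals(), hashCode() and toString() built from the fields.
}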

The real deal here is Traveller's super class, BaseEntity (named that way for lack of a better name). Let's have a look:

@Configurable
public abstract class BaseEntity {

    @PersistenceContext
    protected EntityManager em;

    public abstract long getId();

    public void save() {
        if (getId() == 0) {
            this.em.persist(this);
        } else {
            this.em.merge(this);
        }
        this.em.flush();
    }

    public void delete() {
        this.em.remove(this);
        this.em.flush();
    }

    public void setEntityManager(EntityManager em) {
        this.em = em;
    }

}

The BaseEntity class has quite a few things that can be discussed.

The save() method: I've chosen to have a single method that can either insert or update an entity, based on its id. The problem with this approach is that, firstly, the method has more than one responsibility, making it a bit confusing to understand what it really does. Secondly, it relies on the entity's id, which needs to be a long, making it a very specific and fragile implementation. The advantage is that, from the outside (client code), you don't need to worry about the details: just call save() and you are done. If you prefer a more generic and more cohesive implementation, make the getId() method return a generic type and split the save() method into create() and update() methods, as sketched below. My idea here was just to make it simple to use.
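A sketch of that more generic alternative (my illustration, not the version in the repository):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.beans.factory.annotation.Configurable;

@Configurable
public abstract class BaseEntity<ID> {

    @PersistenceContext
    protected EntityManager em;

    public abstract ID getId();

    public void create() {
        this.em.persist(this); // insert only; fails if the entity already exists
        this.em.flush();
    }

    public void update() {
        this.em.merge(this); // update only; expects an already persisted entity
        this.em.flush();
    }
}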

EntityManager dependency: Here is where the biggest problem lies. For this to work well, every time a new instance of an entity is created, either by hand with new EntityXYZ() or by a framework (e.g. as the result of a JPA / Hibernate query), we want the entity manager to be injected automatically. The only way I found to make this work is to use aspects with AspectJ and Spring.

Configuring AspectJ and Spring

My idea here is not to give a full explanation about the whole AspectJ and Spring integration, mainly because I don't know it very well myself. I'll just give the basic steps to make this example work.

First, add @Configurable to the entity. This tells Spring that the entity is a Spring-managed bean. However, Spring is not aware of instances of the entity, in this case Traveller, being created. This is why we need AspectJ. The only thing we need to do is add the following line to our Spring context XML file.

<context:load-time-weaver />

This makes AspectJ intercept the creation of beans annotated with @Configurable and tells Spring to inject their dependencies. In order for the load-time weaver (LTW) to work, we need to hook into the JVM's class loading so that the first time our entity classes are loaded, AspectJ can kick in, the @Configurable annotation is discovered and all the dependencies are injected. For that we need to pass the following parameter to the JVM:

-javaagent:<path-to-your-maven-repository>/.m2/repository/org/springframework/spring-instrument/3.0.5.RELEASE/spring-instrument-3.0.5.RELEASE.jar

The snippet above is what we must use if using Spring 3.0.x. It works fine inside Eclipse but apparently it has some conflicts with Maven 2.2.1. If you run into any problems, you can use the old version below.

-javaagent:<path-to-your-maven-repository>/.m2/repository/org/springframework/spring-agent/2.5.6/spring-agent-2.5.6.jar

It is also a good idea to add an aop.xml file to your project, limiting the classes that will be woven by AspectJ. Add the aop.xml to the src/main/resources/META-INF folder.




<!DOCTYPE aspectj PUBLIC "-//AspectJ//DTD//EN" "http://www.eclipse.org/aspectj/dtd/aspectj.dtd">
<aspectj>

    <weaver>
        <include within="com.lscc.ddddemo.model.entity.*" />
        <include within="com.lscc.ddddemo.model.entity.builder.*" />
        <exclude within="*..*CGLIB*" />
    </weaver>

</aspectj>

NOTE: The exclude clause is important to avoid some conflicts during the integration test. AspectJ sometimes tries to do its magic with the test classes as well, causing a few problems; the exclude clause avoids that.

Making the integration test work

I'll be using MySQL, so I'll need a database with the following table there:

CREATE TABLE `traveller` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(45) NOT NULL,
  `country` varchar(45) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

As I'm using JPA 2.0, we need a persistence.xml that must be located at src/main/resources/META-INF




<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">

    <persistence-unit name="testPU" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <class>com.lscc.ddddemo.model.entity.Traveller</class>

        <properties>
            <property name="hibernate.show_sql" value="true" />
            <property name="hibernate.format_sql" value="true" />

            <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver" />
            <property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/lscc-ddd" />
            <property name="hibernate.connection.username" value="root" />

            <property name="hibernate.c3p0.min_size" value="5" />
            <property name="hibernate.c3p0.max_size" value="20" />
            <property name="hibernate.c3p0.timeout" value="300" />
            <property name="hibernate.c3p0.max_statements" value="50" />
            <property name="hibernate.c3p0.idle_test_period" value="3000" />
        </properties>
    </persistence-unit>

</persistence>

In our Spring configuration, we also need to tell Spring how to create an EntityManager, providing the EntityManager Factory:




<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="persistenceUnitName" value="testPU" />
</bean>

<tx:annotation-driven transaction-manager="myTransactionManager" />

<bean id="myTransactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>

In order to separate unit tests and integration tests in my Maven project, I've added a different profile for the integration tests in my pom.xml:


<profile>
    <id>with-integration-tests</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.5</version>
                <configuration>
                    <forkMode>always</forkMode>
                    <argLine>-javaagent:/Users/sandro.mancuso/.m2/repository/org/springframework/spring-agent/2.5.6/spring-agent-2.5.6.jar</argLine>
                    <skip>true</skip>
                </configuration>
                <executions>
                    <execution>
                        <id>integration-tests</id>
                        <phase>integration-test</phase>
                        <goals>
                            <goal>test</goal>
                        </goals>
                        <configuration>
                            <skip>false</skip>
                            <excludes>
                                <exclude>**/common/*</exclude>
                            </excludes>
                            <includes>
                                <include>**/*IntegrationTest.java</include>
                            </includes>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>

So now, if you want to run the integration tests from the command line, just type:

mvn clean install -Pwith-integration-tests

If running the tests from inside Eclipse, don't forget to add the -javaagent parameter to the VM.

Conclusion

So now, from anywhere in your application, you can do something like:

Traveller traveller = new Traveller();
traveller.setName("John");
traveller.setCountry("England");
traveller.save();

The advantages of using ActiveRecord:
- Code becomes much simpler;
- There is almost no reason for a DAO layer any more;
- Code is more explicit in its intent.

The disadvantages:
- Entities will need to inherit from a base class;
- Entities would have more than one responsibility (against the Single Responsibility Principle);
- Infrastructure layer (EntityManager) would be bleeding into our domain objects.

In my view, the simplicity of the ActiveRecord pattern is appealing. For the past 10 years we've been designing Java web applications where our entities are anaemic, having state (getters and setters) but no behaviour; in the end, they are pure data structures. I feel that entities must be empowered, and with techniques like this we can do it, abstracting the persistence layer away.

I'm still not convinced that from now on I'll be using ActiveRecord instead of DAOs and POJOs, but I'm glad there is now a viable option. I'll need to try it in a real project, alongside the Command Query Responsibility Segregation (CQRS) pattern, to see how I feel about it. I'm really getting sick of the standard Action/Service/DAO way of developing web applications in Java instead of having a proper domain model.

By the way, to make the whole thing work, I had loads of problems finding the right combination of libraries. Please have a look at my pom.xml file for the details. I'll be evolving this code base to try different things, so when you look at it, it may not be exactly as described here.

https://github.com/sandromancuso/cqrs-activerecord

Interesting links with more technical details:
http://nurkiewicz.blogspot.com/2009/10/ddd-in-spring-made-easy-with-aspectj.html
http://blog.m1key.me/2010/06/integration-testing-your-spring-3-jpa.html

Wednesday, 1 December 2010

Routine Prediction

Not everybody remembers how easy it is and how effective it can be to add (or correct) a few points at the beginning of the FID. Two years ago I explained how Linear Prediction works and how we can extrapolate the FID in both directions. This time I will show a simple practical application.
I have observed that, in recent years, C-13 spectra acquired on Varian instruments require a much-larger-than-it-used-to-be phase correction. When I say correction, I mean first-order phase correction, because the zero-order correction is merely a different way of representing the same thing (a different perspective).
A large first-order phase correction can be substituted with linear prediction. I will show the advantage with a F-19 example, yet the concept is general.

The spectrum, after FT and before phase correction, looks well acquired. Now we apply the needed correction, which amounts to no less than 1073 degrees.
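That number is no accident. A delay of n dwell times (dw) before acquisition produces a linear phase ramp of 360° per missing point across the spectral width SW = 1/dw. A back-of-the-envelope check, assuming the dead time is close to a whole number of dwell times:

% First-order phase created by skipping the first n points of the FID:
\phi(\nu) = 2\pi\,\nu\,n\,dw = 360^\circ \cdot n \cdot \frac{\nu}{SW}
% Total ramp across the spectrum is n x 360 degrees; here:
n \approx 1073^\circ / 360^\circ \approx 3 \ \text{missing points}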

Have you noticed what happened to the baseline? It's all predictable. When you increase the phase correction, the baseline starts rolling: the higher the phase correction, the shorter the waves. With modern methods for correcting the baseline we can eliminate all the waves, yet there are two potential problems: 1) the common methods for automatic phase correction will have a hard time; 2) if you prefer manual phase correction, you need an expert eye to assess the symmetry of the peaks over such a rolling baseline. Anyway, just to show that linear prediction is not a necessity, here is the spectrum after applying a standard baseline correction:

Now let's start from the FID again, this time applying linear prediction. One way to use it is to add the 3 missing points at the beginning. The result, after a mild phase correction (<10°) and before baseline correction, is:

The lesson is: by adding the missing points we correct the phase.
Alternatively we can both add the 3 missing points and recalculate the next 4 points. In this way the baseline improves a lot:

The lesson is: by recalculating the first few points of the FID we can correct the baseline of the frequency domain spectrum.

Wednesday, 27 October 2010

Promotional Sale


There are three possible reasons why you can be tempted by iNMR.
First reason: it's for research. Industry is not using iNMR, not because they don't like it, but because industry doesn't buy Macs anymore. So the majority of iNMR users are not doing repetitive activities. They don't ask to process 20 spectra in 20 seconds. Maybe they want to estimate concentrations by time-consuming line-fitting, or monitor the phosphorylation of a protein through a series of thirty H-N HSQC, or simulate the effect of a slow rotation, as they used to do with DNMR in the '70s. iNMR users asked for such things years ago and now they are already in the program.
Second reason: students learn the program by themselves. Nowadays few research groups are pure-NMR groups. When a new PhD student joins the lab, there are many techniques to learn, not just NMR processing. Luckily, iNMR has many things in common with the other applications they work with daily on their Mac. iNMR also helps novices understand NMR processing because spectra are clearly depicted at every stage. A lot of things become natural after the first day of use.
Third reason: today Mestrelab has started a promotional sale, a sort of end-of-the-year clearance.
You can buy a disposable license at €90 instead of €150. You can download and try the program before buying.

Wednesday, 13 October 2010

OK then, first post!

My acclaimed puzzle/platform game Apple Jack was released at the end of May this year on the Xbox Indie Games service. Here is a video of it:


"Oh", I hear you cry, "Acclaimed is it? Where's the proof?"
 To which I simply shake my head quietly and with a wry smile point you towards the following links:

http://www.eurogamer.net/articles/download-games-roundup-review-11th-june-2010?page=3
http://www.digitalspy.co.uk/gaming/levelup/a222963/indie-pick-apple-jack.html
http://www.armlessoctopus.com/2010/06/21/xbox-indie-review-apple-jack/#more-292
http://gaygamer.net/2010/06/weekly_xbox_indies_6210.html
http://www.xnplay.co.uk/xnplay-essentials-platformers/

"Yeah yeah" I hear you persist, "But those aren't PROPER reviews from respected print magazines, they're just the stupid old internet making up rubbish as usual. I bet you haven't got any reviews from, say,  Edge magazine and the Official Xbox Magazine have you?"

Ahem:

Edge (Issue 217, 8/10)
Official UK Xbox Magazine (Issue 64, 5 stars)


"Oh, OK.." You mumble, thoroughly chastened and embarassed, "So it IS acclaimed after all, I humbly apologise for my rudeness earlier and I will buy your excellent looking game forthwith."

Monday, 27 September 2010

Bad Code: The Invisible Threat

One of the first things said by the non-believers of the software craftsmanship movement was that good and clean code is not enough to guarantee the success of a project. And yes, they are absolutely right. There are innumerable reasons why a project might fail, ranging from business strategy to competitors, project management, cost, time to market, partnerships, technical limitations, integrations and so on.

Because of the number of important concerns a software project has, organisations tend not to pay too much attention to things that are considered less important, like the quality of the software being developed. It is believed that with a good management team, deep hierarchies, micro-management, a strict process and a large amount of good documentation, the project will succeed.

In a software project, the most important deliverable is the software itself. Anything else is secondary.

Many organisations see software development as a production line where the workers (developers) are viewed as less skilled than their highly qualified and much better paid managers. Very rarely will companies like that be able to attract or retain good software developers, leaving their entire business in the hands of mediocre professionals.


Look after your garden

Rather than construction, programming is more like gardening. - The Pragmatic Programmer

Code is organic, not mechanical. Like a garden, code needs constant maintenance. For a garden to look nice all year round, you need to look after its soil, constantly remove weeds, water it regularly, remove some dead plants, replant new ones, and trim or re-arrange existing ones so they stay healthy and look nice as a whole. With basic regular maintenance the garden will always look great, but if you neglect it, even for a short period, the effort to make it nice again will be much bigger. The longer you neglect it, the harder it will be to make it look nice again, and you may even lose some or all of your plants.

Code is no different. If code quality is not constantly looked after, the code starts to deteriorate. Bad design choices, lack of tests and poor use of languages and tools will make parts of the code rot. Bit by bit other parts of the code are contaminated, up to the point where the whole code base is so ill that it becomes extremely painful to change it or add new features to it.

The Invisible Threat

When starting a greenfield project, everything is great. With a non-existent code base, developers can quickly start creating new features without the fear of breaking or changing any existing code. Testers are happy because everything they need to test is new, meaning they don't need to worry about regression tests. Managers can see quick progress in terms of new features added and delivered. What a fantastic first month the team is having.

However, this is not a team of craftsmen. This is a team of average developers structured and treated like unskilled production line workers. 

As time goes by, things get messier, bugs start to appear (some with no apparent explanation) and features start taking longer and longer to be developed and tested. Very slowly, the time to deliver anything starts to stretch out. But this is a slow process, slow enough that it takes months, sometimes a year or two, to be noticed by the management.


It's very common to see projects where, at the beginning, a feature of size X takes N days to implement. Over time, as more bad code is added to the application, a feature of the same size takes much longer to implement than it did at the start of the project. As the quality of the code decreases, the time to implement a new feature, fix a bug or make a change increases. The lower the quality, the higher the number of bugs, the harder the application is to test, and the less robust and reliable it becomes.

Some people say they just don't have time to do it properly but, in general, a lot more time and money is spent later on tests and bug fixing.

Hostage of your own software

When the code base gets to the point where changes or additional features take too long to be implemented, or worse, where developers and managers are scared to touch existing code, action must be taken immediately. This is a very dangerous situation to be in, since business progress is being impeded or delayed by the software instead of being helped by it.

To keep business progress, schedule and budget under control, high quality code needs to be maintained at all costs.

Organisations may need to cancel the implementation of some features, or postpone changes, just because of the amount of time and money they would cost to build. Having poor quality code be responsible for that is totally unacceptable.

The biggest problem here is that bad code is invisible to everyone except developers. Other members of the team only realise that something is wrong when it is too late. This means it is the developers' responsibility to look after the quality of the code. Sometimes developers expose the problem to project managers, but the request for time to refactor the code is often ignored, for various reasons that include a lack of understanding of the impact of bad code and the inability of developers to explain it. On the other hand, when developers reach the point of asking for formal time to do refactoring, it means that, for one reason or another, they neglected the code at some point in the past.

Hire craftsmen not average developers
 

With the amount of literature, tools, technologies and methodologies available, and the infinite source of information on the web, it is simply unacceptable for a team of developers to let the code rot.

Craftsmen are gardeners, constantly looking after the code base and quickly refactoring it without fear, since they are strongly backed by a good battery of tests that can exercise the entire application in just a few minutes. Time constraints or changes in requirements will never be used as excuses for bad code or lack of tests, thanks to the good design principles and techniques constantly applied throughout the application.

Having an empowered team of craftsmen can be the difference between success and failure of any software project.

Quality of code may not guarantee the success of a project, but it can definitely be the main invisible cause of its failure.