Thursday, 9 December 2010

Readable Tests: Separating intent from implementation

Very recently, I was working on a test class like this:

public class AnalyticsExpirationDateManagerTest extends TestCase {

    private static final long ONE_HOUR_TIMEOUT = 1000 * 60 * 60;
    private static final long TWO_HOUR_TIMEOUT = ONE_HOUR_TIMEOUT * 2;

    private Map<Parameter, Long> analyticsToTimeout;
    private long defaultTimeout;

    private Parameter minTimeoutParam;
    @Mock private CacheKeyImpl<Parameter> cacheKey;

    @Override
    protected void setUp() throws Exception {
        MockitoAnnotations.initMocks(this);

        this.minTimeoutParam = new Parameter("minTimeout", "type");

        when(cacheKey.getFirstKey()).thenReturn(minTimeoutParam);

        this.analyticsToTimeout = new HashMap<Parameter, Long>();
        this.defaultTimeout = 0;
    }

    public void
    testGetExpirationDateWhenAnalyticsToTimeoutsAndCacheKeyAreEmpty() {
        AnalyticsExpirationDateManager<Long> manager =
                new AnalyticsExpirationDateManager<Long>(analyticsToTimeout, defaultTimeout);
        Date date = manager.getExpirationDate(cacheKey, 0L);
        assertNotNull(date);
    }

    public void
    testGetExpirationDateWithMinimunTimeoutOfOneHour() {
        this.analyticsToTimeout.put(this.minTimeoutParam, ONE_HOUR_TIMEOUT);
        Collection<Parameter> cacheKeysWithMinTimeoutParam = new ArrayList<Parameter>();
        cacheKeysWithMinTimeoutParam.add(this.minTimeoutParam);
        when(this.cacheKey.getKeys()).thenReturn(cacheKeysWithMinTimeoutParam);

        AnalyticsExpirationDateManager<Long> manager =
                new AnalyticsExpirationDateManager<Long>(analyticsToTimeout, defaultTimeout);
        Date date = manager.getExpirationDate(cacheKey, 0L);

        assertNotNull(date);
        Calendar expirationDate = Calendar.getInstance();
        expirationDate.setTime(date);

        Calendar currentDate = Calendar.getInstance();

        // Check if expiration date is one hour ahead of current date.
        int expirationDateHour = expirationDate.get(Calendar.HOUR_OF_DAY);
        int currentDateHour = currentDate.get(Calendar.HOUR_OF_DAY);
        assertTrue(expirationDateHour - currentDateHour == 1);
    }

    public void
    testGetExpirationDateWhenCacheKeyIsNullAndDefaultTimeoutIsOneHour() {
        CacheKeyImpl<Parameter> NULL_CACHEKEY = null;
        AnalyticsExpirationDateManager<Long> manager =
                new AnalyticsExpirationDateManager<Long>(analyticsToTimeout, ONE_HOUR_TIMEOUT);
        Date date = manager.getExpirationDate(NULL_CACHEKEY, 0L);

        assertNotNull(date);
        Calendar expirationDate = Calendar.getInstance();
        expirationDate.setTime(date);

        Calendar currentDate = Calendar.getInstance();

        // Check if expiration date hour is the same as current date hour.
        // When cache key is null, system date and time is returned and default timeout is not used.
        int expirationDateHour = expirationDate.get(Calendar.HOUR_OF_DAY);
        int currentDateHour = currentDate.get(Calendar.HOUR_OF_DAY);
        assertTrue(expirationDateHour - currentDateHour == 0);
    }

    public void
    testGetExpirationDateWithDefaultTimeout() {
        // Default timeout is used when no timeout is specified.
        Collection<Parameter> cacheKeysWithoutTimeoutParam = new ArrayList<Parameter>();
        cacheKeysWithoutTimeoutParam.add(new Parameter("name", "type"));
        when(this.cacheKey.getKeys()).thenReturn(cacheKeysWithoutTimeoutParam);

        AnalyticsExpirationDateManager<Long> manager =
                new AnalyticsExpirationDateManager<Long>(analyticsToTimeout, ONE_HOUR_TIMEOUT);
        Date date = manager.getExpirationDate(cacheKey, 0L);

        assertNotNull(date);
        Calendar expirationDate = Calendar.getInstance();
        expirationDate.setTime(date);

        Calendar currentDate = Calendar.getInstance();

        // Check if expiration date is one hour ahead of current date.
        int expirationDateHour = expirationDate.get(Calendar.HOUR_OF_DAY);
        int currentDateHour = currentDate.get(Calendar.HOUR_OF_DAY);
        assertTrue(expirationDateHour - currentDateHour == 1);
    }

    public void
    testGetExpirationDateWhenMinTimeoutIsSetAfterCreation() {
        AnalyticsExpirationDateManager<Long> manager =
                new AnalyticsExpirationDateManager<Long>(analyticsToTimeout, ONE_HOUR_TIMEOUT);
        manager.setExpirationTimeout(this.minTimeoutParam.getName(), TWO_HOUR_TIMEOUT);

        Date date = manager.getExpirationDate(cacheKey, 0L);

        assertNotNull(date);
        Calendar expirationDate = Calendar.getInstance();
        expirationDate.setTime(date);

        Calendar currentDate = Calendar.getInstance();

        // Check if expiration date is two hours ahead of current date.
        int expirationDateHour = expirationDate.get(Calendar.HOUR_OF_DAY);
        int currentDateHour = currentDate.get(Calendar.HOUR_OF_DAY);
        assertTrue("Error", expirationDateHour - currentDateHour == 2);
    }

}

Quite frightening, isn't it? Very difficult to understand what's going on there.

The class above achieves 100% coverage of the class under test, and all the tests are valid in terms of what is being tested.

Problems

There are quite a few problems here:
- The intent (what) and implementation (how) are mixed, making the tests very hard to read;
- There is quite a lot of duplication among the test methods;
- There is also a bug in the test methods when comparing dates to work out how many hours one date is ahead of the other. When these tests run in the middle of the day, they pass. Run them between 22:00 and 00:00 and they break, because the hour calculation does not take the day into account.
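The midnight failure is easy to reproduce in isolation. Here is a small sketch (the times are chosen to straddle midnight; the hourDiff helper mirrors the subtraction done in the tests above):

```java
import java.util.Calendar;

class HourDiffBug {

    // The comparison exactly as the original tests do it: subtracting
    // HOUR_OF_DAY values, ignoring which day each date falls on.
    static int hourDiff(Calendar current, Calendar expiration) {
        return expiration.get(Calendar.HOUR_OF_DAY) - current.get(Calendar.HOUR_OF_DAY);
    }

    public static void main(String[] args) {
        Calendar current = Calendar.getInstance();
        current.set(2010, Calendar.DECEMBER, 9, 23, 30); // 23:30

        Calendar expiration = (Calendar) current.clone();
        expiration.add(Calendar.HOUR_OF_DAY, 1);         // rolls over to 00:30 next day

        // The test expects 1, but gets 0 - 23 = -23, so the assertion fails.
        System.out.println(hourDiff(current, expiration)); // prints -23
    }
}
```

Any comparison based on the difference in milliseconds between the two Date objects avoids the problem entirely, which is what the fix below does.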

Making the tests more readable

Besides testing the software, tests should also be seen as documentation, where business rules are clearly specified. Since the tests here are quite messy, understanding the intention and detecting bugs can be quite difficult.

I've done quite a few refactorings to this code in order to make it more readable, always working in small steps and constantly re-running the tests after each change. I'll try to summarise my steps for clarity and brevity.

1. Fixing the hour calculation bug

One of the first things I had to do was fix the hour calculation bug. In order to fix it across all test methods, I decided to extract the hour calculation into a separate class, removing all the duplication from the test methods. Working in small steps, I took the opportunity to build this new class, called DateComparator (yes, I know I suck at naming classes), using some internal Domain Specific Language (DSL) techniques.

public class DateComparator {

    private Date origin;
    private Date target;
    private long milliseconds;
    private long unitsOfTime;

    private DateComparator(Date origin) {
        this.origin = origin;
    }

    public static DateComparator dateOf(Date origin) {
        return new DateComparator(origin);
    }

    public DateComparator is(long unitsOfTime) {
        this.unitsOfTime = unitsOfTime;
        return this;
    }

    public DateComparator hoursAhead() {
        this.milliseconds = unitsOfTime * 60 * 60 * 1000;
        return this;
    }

    public static long hours(int hours) {
        return hoursInMillis(hours);
    }

    private static long hoursInMillis(int hours) {
        return hours * 60 * 60 * 1000;
    }

    public boolean from(Date date) {
        this.target = date;
        return this.checkDifference();
    }

    private boolean checkDifference() {
        return (origin.getTime() - target.getTime() >= this.milliseconds);
    }
}

So now, I can use it to replace the test logic in the test methods.
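With DateComparator statically imported, the whole Calendar block in each test collapses into a single readable assertion. A self-contained sketch of the usage (the class is repeated here in trimmed form, without the hours helpers, so the snippet compiles on its own):

```java
import java.util.Date;

// Trimmed copy of the post's DateComparator so this snippet stands alone.
class DateComparator {
    private Date origin;
    private Date target;
    private long milliseconds;
    private long unitsOfTime;

    private DateComparator(Date origin) { this.origin = origin; }

    public static DateComparator dateOf(Date origin) { return new DateComparator(origin); }

    public DateComparator is(long unitsOfTime) { this.unitsOfTime = unitsOfTime; return this; }

    public DateComparator hoursAhead() {
        this.milliseconds = unitsOfTime * 60 * 60 * 1000;
        return this;
    }

    public boolean from(Date date) {
        this.target = date;
        // Millisecond arithmetic: immune to the midnight rollover bug.
        return origin.getTime() - target.getTime() >= milliseconds;
    }
}

class DateComparatorDemo {
    public static void main(String[] args) {
        Date now = new Date();
        Date inTwoHours = new Date(now.getTime() + 2 * 60 * 60 * 1000);

        // Reads almost like English: "date of expiration is 2 hours ahead, from now".
        System.out.println(DateComparator.dateOf(inTwoHours).is(2).hoursAhead().from(now));
    }
}
```

Because the comparison works on getTime() milliseconds rather than on HOUR_OF_DAY fields, it gives the same answer at 23:30 as it does at midday.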

2. Extracting details into a super class

This step may seem a bit controversial at first, but it can be an interesting approach for separating the what from the how. The idea is to move the test set-up, field declarations, initialisation logic, and everything else related to the test implementation (how) to a super class, leaving the test class with just the test methods (what).

Although this may not be a good OO application of the IS-A rule, I think it is a good compromise in order to achieve better readability in the test class.

NOTE: Logic can be moved to a super class, external classes (helpers, builders, etc) or both.

Here is the super class code:

public abstract class BaseTestForAnalyticsExperationDateManager extends TestCase {

    protected Parameter minTimeoutParam;
    @Mock protected CacheKeyImpl<Parameter> cacheKey;
    protected Date systemDate;
    protected CacheKeyImpl<Parameter> NULL_CACHEKEY = null;
    protected AnalyticsExpirationDateManager<Long> manager;

    @Override
    protected void setUp() throws Exception {
        MockitoAnnotations.initMocks(this);
        this.minTimeoutParam = new Parameter("minTimeout", "type");
        when(cacheKey.getFirstKey()).thenReturn(minTimeoutParam);
        this.systemDate = new Date();
    }

    protected void assertThat(boolean condition) {
        assertTrue(condition);
    }

    protected void addMinimunTimeoutToCache() {
        this.configureCacheResponse(this.minTimeoutParam);
    }

    protected void doNotIncludeMinimunTimeoutInCache() {
        this.configureCacheResponse(new Parameter("name", "type"));
    }

    private void configureCacheResponse(Parameter parameter) {
        Collection<Parameter> cacheKeysWithMinTimeoutParam = new ArrayList<Parameter>();
        cacheKeysWithMinTimeoutParam.add(parameter);
        when(this.cacheKey.getKeys()).thenReturn(cacheKeysWithMinTimeoutParam);
    }
}

3. Move creation and configuration of the object under test to a builder class

The construction and configuration of the AnalyticsExpirationDateManager is quite verbose and adds a lot of noise to the test. Once again I'll be using a builder class in order to make the code more readable and segregate responsibilities. Here is the builder class:

public class AnalyticsExpirationDateManagerBuilder {

    protected static final long ONE_HOUR = 1000 * 60 * 60;

    protected Parameter minTimeoutParam;
    private AnalyticsExpirationDateManager<Long> manager;
    private Map<Parameter, Long> analyticsToTimeouts = new HashMap<Parameter, Long>();
    protected long defaultTimeout = 0;
    private Long expirationTimeout;
    private Long minimunTimeout;

    private AnalyticsExpirationDateManagerBuilder() {
        this.minTimeoutParam = new Parameter("minTimeout", "type");
    }

    public static AnalyticsExpirationDateManagerBuilder aExpirationDateManager() {
        return new AnalyticsExpirationDateManagerBuilder();
    }

    public static long hours(int quantity) {
        return quantity * ONE_HOUR;
    }

    public AnalyticsExpirationDateManagerBuilder withDefaultTimeout(long milliseconds) {
        this.defaultTimeout = milliseconds;
        return this;
    }

    public AnalyticsExpirationDateManagerBuilder withExpirationTimeout(long milliseconds) {
        this.expirationTimeout = new Long(milliseconds);
        return this;
    }

    public AnalyticsExpirationDateManagerBuilder withMinimunTimeout(long milliseconds) {
        this.minimunTimeout = new Long(milliseconds);
        return this;
    }

    public AnalyticsExpirationDateManager<Long> build() {
        if (this.minimunTimeout != null) {
            analyticsToTimeouts.put(minTimeoutParam, minimunTimeout);
        }
        this.manager = new AnalyticsExpirationDateManager<Long>(analyticsToTimeouts, defaultTimeout);
        if (this.expirationTimeout != null) {
            this.manager.setExpirationTimeout(minTimeoutParam.getName(), expirationTimeout);
        }
        return this.manager;
    }

}

The final version of the test class

After many small steps, this is what the test class looks like. I took the opportunity to rename the test methods as well.

import static com.mycompany.AnalyticsExpirationDateManagerBuilder.*;
import static com.mycompany.DateComparator.*;

public class AnalyticsExpirationDateManagerTest extends BaseTestForAnalyticsExperationDateManager {

    public void
    testExpirationTimeWithJustDefaultValues() {
        manager = aExpirationDateManager().build();
        Date cacheExpiration = manager.getExpirationDate(cacheKey, 0L);
        assertThat(dateOf(cacheExpiration).is(0).hoursAhead().from(systemDate));
    }

    public void
    testExpirationTimeWithMinimunTimeoutOfOneHour() {
        addMinimunTimeoutToCache();
        manager = aExpirationDateManager()
                .withMinimunTimeout(hours(1))
                .build();
        Date cacheExpiration = manager.getExpirationDate(cacheKey, 0L);
        assertThat(dateOf(cacheExpiration).is(1).hoursAhead().from(systemDate));
    }

    public void
    testExpirationTimeWhenCacheKeyIsNullAndDefaultTimeoutIsOneHour() {
        manager = aExpirationDateManager()
                .withDefaultTimeout(hours(1))
                .build();
        Date cacheExpiration = manager.getExpirationDate(NULL_CACHEKEY, 0L);
        // When cache key is null, system date and time is returned and default timeout is not used.
        assertThat(dateOf(cacheExpiration).is(0).hoursAhead().from(systemDate));
    }

    public void
    testExpirationTimeWithDefaultTimeout() {
        doNotIncludeMinimunTimeoutInCache();
        manager = aExpirationDateManager()
                .withDefaultTimeout(hours(1))
                .build();
        Date cacheExpiration = manager.getExpirationDate(cacheKey, 0L);
        assertThat(dateOf(cacheExpiration).is(1).hoursAhead().from(systemDate));
    }

    public void
    testExpirationTimeWhenExpirationTimeoutIsSet() {
        manager = aExpirationDateManager()
                .withDefaultTimeout(hours(1))
                .withExpirationTimeout(hours(2))
                .build();
        Date cacheExpiration = manager.getExpirationDate(cacheKey, 0L);
        // Expiration timeout has precedence over default timeout.
        assertThat(dateOf(cacheExpiration).is(2).hoursAhead().from(systemDate));
    }

}


Conclusion

Test classes should be easy to read. They should express intention, system behaviour, business rules. Test classes should express how the system works. They are executable requirements and specifications and should be a great source of information for any developer joining the project.

In order to achieve that, we should try to keep our test methods divided into just three simple parts.

1. Context: The state of the object being tested. Here is where we set all the attributes and mock dependencies. Using variations of the Builder pattern can greatly enhance readability.
manager = aExpirationDateManager()
        .withDefaultTimeout(hours(1))
        .withExpirationTimeout(hours(2))
        .build();

2. Operation: The operation being tested. Here is where the operation is invoked.
Date cacheExpiration = manager.getExpirationDate(cacheKey, 0L);

3. Assertion: Here is where you specify the behaviour expected. The more readable this part is, the better. Using DSL-style code is probably the best way to express the intent of the test.
assertThat(dateOf(cacheExpiration).is(2).hoursAhead().from(systemDate));

In this post I went backwards: I started from a messy test class and refactored it into a more readable implementation. As many people are doing TDD now, I wanted to show how we can improve an existing test. For new tests, I would suggest starting with the Context >> Operation >> Assertion approach: write the test in plain English first, and once the intent is clear, replace the plain English with Java internal DSL code, keeping the implementation details out of the test class.

PS: The ideas for this blog post came from a few discussions I had during the Software Craftsmanship Round-table meetings promoted by the London Software Craftsmanship Community (LSCC).

Tuesday, 7 December 2010

A basic ActiveRecord implementation in Java

Recently I was looking at different ways to develop applications in Java and thought it would be interesting to try using ActiveRecord in my persistence layer instead of the traditional approach with a DAO. My idea is not to create my own ActiveRecord framework but to use existing ones as support for this approach.

The scope: Write functional tests that could prove that the methods save and delete on an entity work. I'll use an entity called Traveller for this example.

The technology: I chose to use the following frameworks: Spring 3.0.5, JPA 2.0, Hibernate 3.5.3, AspectJ 1.6.9, JUnit 4.8.2, Maven 2.2.1, Eclipse Helios and MySQL 5.x

I'll be omitting things that are not too important. For all the details, please have a look at the whole source code at:

https://github.com/sandromancuso/cqrs-activerecord

Let's start with the test class:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(
    locations = {
        "file:src/test/resources/applicationContext-test.xml",
        "file:src/main/resources/applicationContext-services.xml"
    })
@TransactionConfiguration(transactionManager = "myTransactionManager", defaultRollback = true)
@Transactional
public class TravellerActiveRecordIntegrationTest extends BaseTravellerIntegration {

    @Test public void
    testTravellerSelfCreation() {
        assertThereAreNoTravellers(named("John"), from("England"));

        Traveller traveller = aTraveller().named("John").from("England").build();
        traveller.save();

        assertThereIsASingleTraveller(named("John"), from("England"));
    }

    @Test public void
    testTravellerEdition() {
        Traveller traveller = aTraveller().named("John").from("England").build();
        traveller.save();

        traveller.setName("Sandro");
        traveller.setCountry("Brazil");
        traveller.save();

        assertThereAreNoTravellers(named("John"), from("England"));
        assertThereIsASingleTraveller(named("Sandro"), from("Brazil"));
    }

    @Test public void
    testDeleteTraveller() {
        Traveller traveller = aTraveller().named("John").from("England").build();
        traveller.save();

        traveller.delete();
        assertThereAreNoTravellers(named("John"), from("England"));
    }

}

A few things to notice about this test class:
- As this test is meant to insert into and delete from the database, I set it to always roll back the transaction after each test, meaning that nothing is committed to the database permanently. This is important in order to execute the tests multiple times and get the same results.
- The test class extends a base class that provides the methods assertThereAreNoTravellers and assertThereIsASingleTraveller. The advantage of doing that is that you separate what you want to test from how you test it, keeping the test class clean and focused on the intention.
- Note that I also used the Builder pattern to build the traveller instance instead of calling the setter methods directly. This is a nice approach for making your tests more readable.
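The post doesn't show the builder itself, so here is a minimal sketch of what it might look like, inferred from the aTraveller().named(...).from(...).build() calls in the tests. The Traveller stand-in below is included only so the sketch compiles on its own; the real entity appears in the next section:

```java
// Minimal stand-in for the Traveller entity, included only to keep the sketch self-contained.
class Traveller {
    private String name;
    private String country;

    public void setName(String name) { this.name = name; }
    public void setCountry(String country) { this.country = country; }
    public String getName() { return name; }
    public String getCountry() { return country; }
}

// Hypothetical builder matching the fluent calls used in the integration tests.
class TravellerBuilder {
    private String name;
    private String country;

    private TravellerBuilder() {}

    public static TravellerBuilder aTraveller() {
        return new TravellerBuilder();
    }

    public TravellerBuilder named(String name) {
        this.name = name;
        return this;
    }

    public TravellerBuilder from(String country) {
        this.country = country;
        return this;
    }

    public Traveller build() {
        Traveller traveller = new Traveller();
        traveller.setName(name);
        traveller.setCountry(country);
        return traveller;
    }
}
```

With a static import of aTraveller, the chain aTraveller().named("John").from("England").build() reads as a single sentence in the tests.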

The Traveller entity implementation

So now, let's have a look at the Traveller entity implementation.

@Entity
@Table(name = "traveller")
@EqualsAndHashCode(callSuper = false)
@Configurable
public @Data class Traveller extends BaseEntity {

    @Id @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;
    private String name;
    private String country;

}

The Traveller class is a normal JPA entity, as you can see from the @Entity, @Table, @Id and @GeneratedValue annotations. To reduce boilerplate code like getters, setters, toString() and hashCode(), I'm using Lombok, a small library that generates all of that for us. Just add the @Data annotation.
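For readers who haven't used Lombok: @Data makes Lombok generate, at compile time, roughly the equivalent of the hand-written code below (simplified; the real @Data output also includes equals(), hashCode() and a constructor for required fields):

```java
// Hand-written approximation of what Lombok's @Data generates
// for the three Traveller fields (simplified).
class Traveller {
    private long id;
    private String name;
    private String country;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getCountry() { return country; }
    public void setCountry(String country) { this.country = country; }

    @Override
    public String toString() {
        return "Traveller(id=" + id + ", name=" + name + ", country=" + country + ")";
    }
}
```

None of this appears in the source file; Lombok weaves it into the bytecode, which is why the entity above stays three lines long.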

The real deal here is Traveller's super class, BaseEntity (for lack of a better name). Let's have a look:

@Configurable
public abstract class BaseEntity {

    @PersistenceContext
    protected EntityManager em;

    public abstract long getId();

    public void save() {
        if (getId() == 0) {
            this.em.persist(this);
        } else {
            this.em.merge(this);
        }
        this.em.flush();
    }

    public void delete() {
        this.em.remove(this);
        this.em.flush();
    }

    public void setEntityManager(EntityManager em) {
        this.em = em;
    }

}

The BaseEntity class has quite a few things that can be discussed.

The save() method: I've chosen to have a single method that can either insert or update an entity, based on its id. The problem with this approach is that, firstly, the method has more than one responsibility, making it a bit confusing to understand what it really does. Secondly, it relies on the entity's id, which needs to be a long, making it a very specific and fragile implementation. The advantage is that from the outside (client code) you don't need to worry about the details: just call save() and you are done. If you prefer a more generic and more cohesive implementation, make getId() return a generic type and split save() into create() and update() methods. My idea here was just to keep it simple to use.
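As a sketch of that alternative, here is what the more generic base class might look like, with getId() returning a generic type and save() split into single-responsibility create() and update() methods. The names are my own suggestion, not code from the project, and the EntityManager interface below is a minimal stand-in for JPA's, included only to keep the sketch self-contained:

```java
// Minimal stand-in for javax.persistence.EntityManager, for illustration only.
interface EntityManager {
    void persist(Object entity);
    <T> T merge(T entity);
    void flush();
}

// Hypothetical generic variant of the post's BaseEntity.
abstract class BaseEntity<ID> {

    protected EntityManager em;

    public abstract ID getId();

    // One responsibility each: create persists, update merges.
    public void create() {
        em.persist(this);
        em.flush();
    }

    public void update() {
        em.merge(this);
        em.flush();
    }

    // Optional convenience for callers: an unsaved entity has no id yet.
    public void save() {
        if (getId() == null) {
            create();
        } else {
            update();
        }
    }

    public void setEntityManager(EntityManager em) {
        this.em = em;
    }
}
```

The trade-off is exactly the one described above: callers now choose between create() and update(), but each method does one clearly named thing and the id type is no longer fixed to long.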

EntityManager dependency: Here is where the biggest problem lies. For this to work well, every time a new instance of an entity is created, either by hand with new EntityXYZ() or by a framework (e.g. as the result of a JPA/Hibernate query), we want the entity manager to be injected automatically. The only way I found to make this work is using aspects, with AspectJ and Spring.

Configuring AspectJ and Spring

My idea here is not to give a full explanation about the whole AspectJ and Spring integration, mainly because I don't know it very well myself. I'll just give the basic steps to make this example work.

First, add @Configurable to the entity. This tells Spring that the entity is a Spring-managed bean. However, Spring is not aware of instances of the entity (in this case, Traveller) being created. This is why we need AspectJ. The only thing we need to do is add the following line to our Spring context XML file:

<context:load-time-weaver />

This makes AspectJ intercept the creation of beans annotated with @Configurable and tells Spring to inject their dependencies. For load-time weaving (LTW) to work, we need to hook into the JVM's class loading so that, the first time our entity classes are loaded, AspectJ can kick in, the @Configurable annotation is discovered, and all the dependencies are injected. For that, we need to pass the following parameter to the JVM:

-javaagent:<path-to-your-maven-repository>/.m2/repository/org/springframework/spring-instrument/3.0.5.RELEASE/spring-instrument-3.0.5.RELEASE.jar

The snippet above is what we must use if using Spring 3.0.x. It works fine inside Eclipse but apparently it has some conflicts with Maven 2.2.1. If you run into any problems, you can use the old version below.

-javaagent:<path-to-your-maven-repository>/.m2/repository/org/springframework/spring-agent/2.5.6/spring-agent-2.5.6.jar

It is also a good idea to add an aop.xml file to your project, limiting the classes affected by AspectJ. Put the aop.xml file in the src/main/resources/META-INF folder.

<!DOCTYPE aspectj PUBLIC "-//AspectJ//DTD//EN" "http://www.eclipse.org/aspectj/dtd/aspectj.dtd">
<aspectj>
    <weaver>
        <include within="com.lscc.ddddemo.model.entity.*" />
        <include within="com.lscc.ddddemo.model.entity.builder.*" />
        <exclude within="*..*CGLIB*" />
    </weaver>
</aspectj>

NOTE: The exclude clause is important to avoid some conflicts during the integration tests. AspectJ sometimes tries to do its magic with the test classes as well, causing a few problems; the exclude clause avoids that.

Making the integration test work

I'll be using MySQL, so I'll need a database with the following table there:

CREATE TABLE `traveller` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `name` varchar(45) NOT NULL,
    `country` varchar(45) NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB;

As I'm using JPA 2.0, we need a persistence.xml that must be located at src/main/resources/META-INF

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">

    <persistence-unit name="testPU" transaction-type="RESOURCE_LOCAL">
        <class>com.lscc.ddddemo.model.entity.Traveller</class>

        <properties>
            <property name="hibernate.show_sql" value="true" />
            <property name="hibernate.format_sql" value="true" />

            <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver" />
            <property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/lscc-ddd" />
            <property name="hibernate.connection.username" value="root" />

            <property name="hibernate.c3p0.min_size" value="5" />
            <property name="hibernate.c3p0.max_size" value="20" />
            <property name="hibernate.c3p0.timeout" value="300" />
            <property name="hibernate.c3p0.max_statements" value="50" />
            <property name="hibernate.c3p0.idle_test_period" value="3000" />
        </properties>
    </persistence-unit>
</persistence>

In our Spring configuration, we also need to tell Spring how to create an EntityManager, providing the EntityManager Factory:

<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="persistenceUnitName" value="testPU" />
</bean>

<tx:annotation-driven transaction-manager="myTransactionManager" />

<bean id="myTransactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>

In order to separate unit tests and integration tests in my Maven project, I've added a different profile for the integration tests in my pom.xml:

<profile>
    <id>with-integration-tests</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.5</version>
                <inherited>true</inherited>
                <configuration>
                    <forkMode>always</forkMode>
                    <argLine>-javaagent:/Users/sandro.mancuso/.m2/repository/org/springframework/spring-agent/2.5.6/spring-agent-2.5.6.jar</argLine>
                </configuration>
                <executions>
                    <execution>
                        <id>integration-tests</id>
                        <phase>integration-test</phase>
                        <goals>
                            <goal>test</goal>
                        </goals>
                        <configuration>
                            <excludes>
                                <exclude>**/common/*</exclude>
                            </excludes>
                            <includes>
                                <include>**/*IntegrationTest.java</include>
                            </includes>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>

So now, if you want to run the integration tests from the command line, just type:

mvn clean install -Pwith-integration-tests

If running the tests from inside Eclipse, don't forget to add the -javaagent parameter to the VM.

Conclusion

So now, from anywhere in your application, you can do something like:

Traveller traveller = new Traveller();
traveller.setName("John");
traveller.setCountry("England");
traveller.save();

The advantage of using the ActiveRecord:
- Code becomes much simpler;
- There is almost no reason for a DAO layer any more;
- Code is more explicit in its intent.

The disadvantages:
- Entities will need to inherit from a base class;
- Entities would have more than one responsibility (against the Single Responsibility Principle);
- Infrastructure layer (EntityManager) would be bleeding into our domain objects.

In my view, I like the simplicity of the ActiveRecord pattern. For the past 10 years we've been designing Java web applications where our entities are anaemic, having state (getters and setters) and no behaviour; in the end, they are pure data structures. I feel that entities must be empowered, and with techniques like this we can do it, abstracting away the persistence layer.

I'm still not convinced that from now on I'll be using ActiveRecord instead of DAOs and POJOs, but I'm glad that there is now a viable option. I'll need to try it in a real project, alongside the Command Query Responsibility Segregation (CQRS) pattern, to see how I feel about it. I'm really getting sick of the standard Action/Service/DAO way of developing web applications in Java instead of having a proper domain model.

By the way, to make the whole thing work I had loads of problems finding the right combination of libraries. Please have a look at my pom.xml file for details. I'll be evolving this code base to try different things, so when you look at it, it may not be exactly as described here.

https://github.com/sandromancuso/cqrs-activerecord

Interesting links with more technical details:
http://nurkiewicz.blogspot.com/2009/10/ddd-in-spring-made-easy-with-aspectj.html
http://blog.m1key.me/2010/06/integration-testing-your-spring-3-jpa.html

Wednesday, 1 December 2010

Routine Prediction

Not everybody remembers how easy it is and how effective it can be to add (or correct) a few points at the beginning of the FID. Two years ago I explained how Linear Prediction works and how we can extrapolate the FID in both directions. This time I will show a simple practical application.
I have observed that, in recent years, C-13 spectra acquired on Varian instruments require a much-larger-than-it-used-to-be phase correction. When I say correction, I mean first-order phase correction, because the zero-order correction is merely a different way of representing the same thing (a different perspective).
A large first-order phase correction can be substituted with linear prediction. I will show the advantage with a F-19 example, yet the concept is general.

The spectrum, after FT and before phase correction, looks well acquired. Now we apply the needed correction, which amounts to no less than 1073 degrees.

Have you noticed what happened to the baseline? It's all predictable. When you increase the phase correction, the baseline starts rolling. The higher the phase correction, the shorter the waves. With modern methods for correcting the baseline we can eliminate all the waves, yet there are two potential problems: 1) the common methods for automatic phase correction will have a hard time; 2) if you prefer manual phase correction, you need an expert eye to assess the symmetry of the peaks over such a rolling baseline. Anyway, just to show you that linear prediction is not a necessity, here is the spectrum after applying standard baseline correction:

Now let's start from the FID again, this time applying linear prediction. One way to use it is to add the 3 missing points at the beginning. The result, after a mild phase correction (<10°) and before the baseline correction, is:

The lesson is: by adding the missing points we correct the phase.
Alternatively we can both add the 3 missing points and recalculate the next 4 points. In this way the baseline improves a lot:

The lesson is: by recalculating the first few points of the FID we can correct the baseline of the frequency domain spectrum.