Reflections on Tech-Ed 2007 - Part 4
The previous posts in the series have covered highlights of the Tech-Ed sessions. This post continues the series with the discussion of a more technical session.
ARC302 – TDD Meets MVP
Ron Jacobs had entertained us as MC for the keynote presentation, so it was intriguing to find him in more of a university lecturer role, providing valuable information and insight. As the last session I attended at the conference, it was interesting, stimulating and a superb finish to the whole event. A bit of a disclaimer: the full title of his presentation also mentioned UX, but there was little coverage in this area apart from highlighting that there is no such thing as 'no design' - there is only good design and bad design.
The main focus of this presentation was to convey how the MVP pattern can be adjusted to easily support Test-Driven Development (TDD), a fairly recent concept. This session started with a few key points of TDD:
- TDD is mainly about design, rather than testing. The tests that are written form the specification of what the software should do, therefore running the tests provides confirmation that the software not only works as expected, but also meets the functional requirements.
- Tests should be written before code. Since the tests form the specification, it is logical for them to be written first. It doesn't make sense for the implementation to be done before the specification, with the specification only confirmed afterwards - that would simply be a waste of effort. From personal experience on a project where the specification was updated after the implementation, I can say this tactic carries a lot of risk and doesn't look professional.
- TDD aids design. By writing tests upfront, a clearer picture emerges of what the class should actually do. This defines what class members are needed, since the tests hook into these directly. It also means that no more functionality is built in than is necessary. If it isn't within the code coverage of the tests, it isn't needed.
- Refactoring is encouraged. The cycle of TDD is "Write test, write code, refactor". Refactoring is where duplication is removed and the code becomes streamlined as the need arises. It is also faster to make such steady progress towards a versatile implementation than to try and do this upfront, where one would sit and think. And think. And think some more. Then actually do something. (A small sketch of this rhythm follows the list.)
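To make that rhythm concrete, here is a minimal sketch of the test-first cycle. TypeScript is used purely for brevity, and the OrderTotal class, its members and the 25% tax rate are all invented for illustration rather than taken from the session:

```typescript
import { strict as assert } from "node:assert";

// Red: the test is written first and states the expected behaviour.
// (Before OrderTotal existed, this would not even compile - that failure is
// the starting point of the cycle.)
function testTotalIncludesTax(): void {
  const order = new OrderTotal(0.25); // hypothetical class, 25% tax rate
  order.addLine(100);
  assert.equal(order.total(), 125);
}

// Green: write just enough code to make the test pass.
class OrderTotal {
  private lines: number[] = [];
  constructor(private taxRate: number) {}

  addLine(amount: number): void {
    this.lines.push(amount);
  }

  total(): number {
    const subtotal = this.lines.reduce((sum, line) => sum + line, 0);
    return subtotal * (1 + this.taxRate);
  }
}

// Refactor: tidy the implementation while the passing test keeps it honest.
testTotalIncludesTax();
console.log("test passed");
```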
Now that TDD was understood, the meat of the session began: a brief history of architecture, leading from monolithic applications to loose coupling with MVC, then to MVP (and its variants, Supervising Controller and Passive View). This led to the variant covered in the session, known as Presenter-First. (A 376kB PDF version of a conference paper on this can be found here.)
In Presenter First, there is no direct interaction between the model and view – everything passes through the presenter. This is unlike standard MVP, where it is possible for notifications to be sent directly from the model to the view. The model and view are each encapsulated behind an interface, which forms a ‘contract’ used by the presenter. As a result, the model and view do not know anything about the presenter or each other.
Since communication from the model and view back to the presenter is via notifications (e.g. events), no public methods are required on the presenter. Instead, the presenter subscribes to the view's and model's events in its constructor, which accepts the model and view interfaces as parameters. Each event handler therefore corresponds to a command to act on and implements the corresponding interaction logic: 'When [particular event happens], do [this action]'. Having tried this on one of my personal projects, I can say that it is very helpful for breaking down the complexity of the presentation / interaction logic (which is separate from the domain logic that belongs in the model). It also prevents the view from 'getting too smart', so the view stays simple, doing nothing more than the following (a rough sketch of the constructor wiring appears after the list):
- Updating the display upon instruction by the presenter
- Issuing commands to the presenter based on user input or other events (e.g. timers)
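To make the wiring concrete, here is a minimal sketch of such a presenter. The ICustomerView and ICustomerModel contracts (and their members) are hypothetical, invented for illustration rather than taken from the session:

```typescript
// Hypothetical contracts used by the presenter; the concrete view and model
// implement them without knowing about the presenter or each other.
interface ICustomerView {
  onNameEdited(handler: (name: string) => void): void; // notification: user input
  showCustomerName(name: string): void;                // instruction from the presenter
}

interface ICustomerModel {
  onCustomerChanged(handler: (name: string) => void): void; // notification: state change
  rename(name: string): void;                               // domain action
}

// The presenter exposes no public methods: all behaviour is wired up in the
// constructor as "when [this event] happens, do [this action]".
class CustomerPresenter {
  constructor(view: ICustomerView, model: ICustomerModel) {
    view.onNameEdited(name => model.rename(name));                // command from the view
    model.onCustomerChanged(name => view.showCustomerName(name)); // state pushed to the view
  }
}
```

Because the constructor is the only public surface, creating the presenter is the wiring step; nothing else needs to hold a reference to it.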
Here’s the process for breaking the work down into easily manageable chunks with Presenter-First:
- Establish the use cases / user stories of the application.
- Define the commands for the presenter to act on based on breaking down the use cases into steps (i.e. When X happens, do Y). This also allows the model and view interfaces to be defined.
- Flesh out the presenter, thus wiring the event handlers / commands to actions.
- Develop the model, implementing the interface defined earlier so that the presenter can drive it.
- Create the views using the interface defined. This basically involves mapping control event handlers to the notifications handled by the presenter, and wiring the interface methods to produce the output.
Of course, this may be done in an agile fashion in response to changing requirements. The model, view and presenter may also be developed in parallel.
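This is also where the 'TDD meets MVP' title pays off: because the presenter only talks to two interfaces, it can be test-driven with simple hand-rolled stubs that record calls and raise fake events. Below is a rough sketch along those lines, repeating the invented ICustomerView / ICustomerModel contracts from the earlier snippet so that it stands alone (a real project would more likely use a test framework and a mocking library):

```typescript
import { strict as assert } from "node:assert";

// Invented contracts and presenter from the earlier sketch.
interface ICustomerView {
  onNameEdited(handler: (name: string) => void): void;
  showCustomerName(name: string): void;
}
interface ICustomerModel {
  onCustomerChanged(handler: (name: string) => void): void;
  rename(name: string): void;
}
class CustomerPresenter {
  constructor(view: ICustomerView, model: ICustomerModel) {
    view.onNameEdited(name => model.rename(name));
    model.onCustomerChanged(name => view.showCustomerName(name));
  }
}

// Hand-rolled stubs: they record the calls they receive and let the test raise fake events.
class StubView implements ICustomerView {
  shown: string[] = [];
  private editHandler: (name: string) => void = () => {};
  onNameEdited(handler: (name: string) => void): void { this.editHandler = handler; }
  showCustomerName(name: string): void { this.shown.push(name); }
  raiseNameEdited(name: string): void { this.editHandler(name); }
}
class StubModel implements ICustomerModel {
  renamedTo: string[] = [];
  private changeHandler: (name: string) => void = () => {};
  onCustomerChanged(handler: (name: string) => void): void { this.changeHandler = handler; }
  rename(name: string): void { this.renamedTo.push(name); }
  raiseCustomerChanged(name: string): void { this.changeHandler(name); }
}

// "When the user edits the name, ask the model to rename."
const view = new StubView();
const model = new StubModel();
new CustomerPresenter(view, model);
view.raiseNameEdited("Ada");
assert.deepEqual(model.renamedTo, ["Ada"]);

// "When the model reports a change, tell the view to display it."
model.raiseCustomerChanged("Ada Lovelace");
assert.deepEqual(view.shown, ["Ada Lovelace"]);
console.log("presenter tests passed");
```

No UI framework appears anywhere in the test, which is exactly what makes the presenter so easy to test-drive.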
A good thing about this presentation is that it also clarified the roles of the model and the presenter in particular. The presenter is called the presenter for a reason – to present the state of the model to the view and to act on commands issued by the view. It is not simply the holding bin where all the logic (domain and interaction / presentation) is kept. Likewise, the model isn’t just for persisting data to memory, files or databases; it is responsible for enforcing data integrity, applying domain logic and processing data. Therefore, in the analysis and design of the application, it is important to distinguish the domain logic from the interaction logic so that the interaction between the model and presenter is kept as simple as possible. Similarly, care has to be taken to avoid placing interaction logic in the view. Most complexity within the view should be devoted to defining how to transform the data provided by the presenter into the visual representation.
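To illustrate that last point, a concrete view under this split does little more than translate presenter instructions into output and forward raw user input as notifications. The browser-flavoured sketch below is hypothetical and reuses the invented ICustomerView contract from above:

```typescript
// Invented contract, repeated so the sketch stands alone.
interface ICustomerView {
  onNameEdited(handler: (name: string) => void): void;
  showCustomerName(name: string): void;
}

// A concrete view: no interaction or domain logic, just the mapping between
// raw UI events/widgets and the notifications/instructions of the contract.
class CustomerPage implements ICustomerView {
  private handler: (name: string) => void = () => {};

  constructor(private input: HTMLInputElement, private label: HTMLElement) {
    // Forward raw user input as a notification; decide nothing here.
    input.addEventListener("change", () => this.handler(input.value));
  }

  onNameEdited(handler: (name: string) => void): void {
    this.handler = handler;
  }

  showCustomerName(name: string): void {
    // The only 'complexity' is turning presenter data into the visual form.
    this.label.textContent = `Customer: ${name}`;
  }
}
```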