Wednesday, 29 May 2013

Estimating Might Be Broken, But It’s Not Evil


Ron Jeffries's essay Estimation is Evil talks about how absurd estimating can be on a software project, and the nightmare scenarios that teams can end up in:


…Then we demand that the developers “estimate” when they’ll be done with all this stuff. They, too, know less about this product than they ever will again, and they don’t understand most of these requirements very well. But they do their best and they come up with a date. Does the business accept this date? Of course not! First of all, it’s only an estimate. Second, clearly the developers would leave themselves plenty of slack—they always do. Third, that’s not when we want it. We want it sooner.



So we push back on the developers’ estimate, leaning harder and harder until we’re sure we’ve squeezed out all the fat they left in there. Sometimes we even just tell them when they have to be done.



Either way, the developers leave the room, heads down, quite sure that they have again been asked to do the impossible. And the business people write down the date: “Development swears they’ll be done November 13th at 12:25PM.”

Software Estimation is Broken



Software Estimation – the way that most of us do it – is broken. As an industry we’re bad at estimating, we've been bad at it for a long time, and there’s no evidence that we’re getting much better at it.


Developers know this. The business knows this – so they don’t trust what the development team comes up with, and try to make their own plans. Management knows this too, so they work around estimates (I’ll take everything and double it), or worse they abuse estimates, cut them to the bone, and then use them as a lever to drive the team towards an unachievable goal.


Jeffries says that even teams who are trying to estimate properly are excessively concerned with predictability (and all of the overheads and navel gazing that come with trying to be predictable), when they really should be working on getting the right things done as soon as possible, which is all that the business actually cares about.


So because it’s hard, and because we’re not good at it, and because some people ignore estimates or abuse them, we should stop estimating altogether.


As developers, what we need to do is make sure that we understand what the most important thing is to the business, break the problem down into the smallest pieces possible, start iterating right away, deliver something, then go on to the next most important thing, and keep going until the business gets what it really needs. If you can’t convince “them” (the sponsors) that this is the right way to work, then go through the theatre of estimating to get the project approved (knowing that whatever you come up with is going to be wrong, and that management and the business are going to ignore it or use it against you anyway), and then get to the real work: understand what the most important thing is to the business, break the problem down into the smallest pieces possible and start iterating right away. In other words:


“Stop estimating. Start Shipping”.


Martin Fowler wrote a recent post PurposeofEstimation where he says estimates are needed only if they help you make “significant decisions”. His examples are getting resources allocated (portfolio management go/no go – the game Jeffries describes above), and coordination, where your team's work needs to fit in with other teams (although he talks only about coordinating with other developers, ignoring the need to coordinate with customers, partners and other projects that have nothing to do with software development). There are many other times when estimates are needed: delivering to fixed-price contracts, budgeting for programs, when you need to hit a drop dead date (for example, when an industry-wide change is going to happen whether you are done or not).


The rest of the time, if your team is small enough and they know what they’re doing and they’re working closely with the business and delivering working software often, then nobody cares all that much about estimating – which to be honest is how my team does a lot of our work.


But this is a problem-solving approach, not a project management approach.

If you don’t know what you are building, why estimate?


It can work for online startups and other exploratory development: you have an idea that you think is good, but you’re not sure of the details, what people are going to like and what will catch on. If you don’t really know what you are building, then there’s no point trying to estimate how long it is going to take. Somebody will decide how much money you can spend, you start simple, deliver something useful and important (“minimum viable product”) as soon as you can so you can get feedback and keep iterating until hopefully enough people are using it and are telling you what you really need to build, or you run out of money.


We’re back to Evolutionary Prototyping, but with a strict focus on minimal features and prioritization (do only what’s important, don’t even try to consider the entire project because you may not have to deliver it all anyways), and fast feedback loops. Now it’s called “The Lean Startup” method.

If you are going to change it again tomorrow, why estimate?



Working this way also makes sense for a lot of online businesses: Facebook, Twitter, LinkedIn, Etsy and Netflix all work this way today. They are constantly delivering small features, or breaking bigger changes into small steps and pushing them out incomplete but “dark” as soon as they can (often several times a day). They are constantly fiddling with the UI, adding personalization features and new channels, integrating with new partners, and continuously capturing behavioural data so that they can tell which changes users like, or at least which ones they are willing to put up with, trying out new ideas and running A/B tests knowing that some or most of these ideas will fail.
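The “dark” launch pattern described above can be sketched with a simple feature flag: the code ships to production, but it only runs for a chosen slice of users, and a feature at 0% is deployed but invisible. A minimal illustration in Python; the flag table, feature names and rollout percentages are all hypothetical, not any particular company’s system:

```python
import hashlib

# Hypothetical flag table: feature name -> percentage of users who see it.
# A feature at 0% is "dark": deployed to production, but no user sees it.
ROLLOUT = {"new_checkout": 5, "dark_search_index": 0}

def is_enabled(feature, user_id):
    """Deterministically bucket a user into [0, 100) and compare
    the bucket against the feature's rollout percentage, so the
    same user always gets the same answer for the same feature."""
    pct = ROLLOUT.get(feature, 0)
    digest = hashlib.md5(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < pct

# The dark feature never fires, no matter who asks.
assert is_enabled("dark_search_index", "alice") is False
```

Ramping a feature up is then just editing the percentage, which is what makes pushing incomplete work to production safe.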


This work can be done by small teams working independently, so the size and cost of each “project” is small. Marketing wants to add a new interface or run a small experiment of some kind (the details are fuzzy, we’re going to have to iterate through it); it will probably only take a few weeks or maybe a month or two, so just get a few smart people together and see how fast you can get something working. If it doesn't work, or it’s taking too long, cancel it and go on to the next idea. It’s an attention-deficit way of working, but if you are chasing after new customers or new revenue sources and your customers will put up with you experimenting with them, it can work.

Don’t bother estimating, just make it work



And routine maintenance (anything that doesn't have a fixed/drop-dead end date) can be done this way too. David Anderson’s most persuasive arguments in favor of Kanban (working without estimates and continuously pushing out individual pieces of work) are in streamlining maintenance and operations work.


The business doesn't care that much about how this kind of work is managed – they just want important things fixed ASAP, and at the least possible cost. Most of this work is done by individuals or small teams, so again the cost of any piece of work is small. Instead of wasting time trying to estimate each change or fix upfront, you assume everything takes 3 days or whatever to do, and if you aren't finished at 3 days, then stop and escalate it to management, let them review and decide whether the work needs more scoping or should be done at all. Focus on getting things done and everybody is happy.
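The “assume everything takes 3 days, then escalate” rule above is simple enough to automate against a work queue. A sketch of the policy check, with an invented timebox constant and ticket fields:

```python
from datetime import date, timedelta

# The blanket assumption that replaces per-item estimates.
TIMEBOX_DAYS = 3

def needs_escalation(started, today, done):
    """An unfinished item that has aged past the timebox gets
    escalated to management for re-scoping or cancellation."""
    return not done and (today - started) > timedelta(days=TIMEBOX_DAYS)

today = date(2013, 5, 29)
assert needs_escalation(date(2013, 5, 24), today, done=False) is True   # 5 days old
assert needs_escalation(date(2013, 5, 27), today, done=False) is False  # still inside the box
assert needs_escalation(date(2013, 5, 24), today, done=True) is False   # finished, nothing to escalate
```

The point of the rule is that the check is on elapsed time, not on an upfront estimate: nobody scopes the work until it proves it needs scoping.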

Why bother estimating – it’s just another mobile app


And it works for mobile app development, again where most work is done by small teams and most of the focus is on the user experience, where the uncertainties are more around what customers are going to like (the product concept, the look-and-feel – which means lots of iterative design work, prototyping and usability testing... and this is going to take how long?) and not on technical risks or project risks.

But you can’t run a project without estimating



Yes, a lot of work is done, and can be done, in small projects by small teams, and if the project is small enough and short enough then you may not need to bother much (or at all) with estimating – because you’re not really running a project, you’re problem solving.


But this way of working doesn't scale up to large organizations running large projects and large programs with significant business and technical risks which need to be managed throughout, and work that needs to be coordinated between different teams doing lots of different things in different places at different times, with lots of handoffs and integration points and dependencies. This is where predictability – knowing where you are and seeing ahead to where you are going to be (and where everybody else is going to be) with confidence – is more important than minimizing cycle time and rapid feedback and improvisation.


It comes down to whether you need to deliver something as a big project or you can get away with solving many smaller problems instead. While there is evidence that software development projects are getting shorter on average (because people have learned that smaller projects fail less often or at least fail faster), some problems are too big to be solved piecemeal. So estimating isn’t going to go away – most of us have to understand estimating better and get better at doing it.



#NoEstimates – building software without estimating – is like Agile development in the early days. Then it was all about small teams of crackerjack programmers delivering small projects quickly (or not delivering them, but still doing it quickly) and going against the grain of accepted development methods. Agile didn’t scale, but it got a lot of buzz and achieved enough success that eventually many of the ideas crossed into the mainstream and we found ways to balance agility and discipline, so that now Agile development methods are being used successfully in even large scale programs. I don’t see how anyone can successfully manage a big project without estimating, but that doesn't mean that some people aren't going to try – I just wouldn't want to be around when they try it.

Sunday, 26 May 2013

15 Years of Gran Turismo: An Interview with Kazunori Yamauchi and Jim Ryan


As many of you will know, last week the curtain was raised on Gran Turismo 6, the new installment in Polyphony Digital's legendary racing series, due out this year on PlayStation 3.





While journalists from all over the world were getting their first look at the game at the Silverstone event, PlayStation Blog had the chance to talk with Kazunori Yamauchi, CEO of Polyphony, and Jim Ryan, CEO of Sony Computer Entertainment Europe, about GT's fifteen years of success, with a look at the innovations the series will bring to the genre in the future.








What was your first encounter with the Gran Turismo series?


Jim Ryan: I was a big fan of Motor Toon Grand Prix. I was at their Tokyo offices, I don't remember what the studio was called at the time, and somebody said to me: "Come and see Kazunori's new game." I was used to a colourful, fun racing game, so after trying that early version of GT I thought: "Well, it's very nice, especially technically and graphically, but I don't know if it could work. Motor Toon GP was fantastic!"





Of course I was wrong, and Kazunori was completely right! If he had kept working on Motor Toon GP, we wouldn't be here now celebrating seventy million copies of Gran Turismo sold worldwide…












What do you think has been the most important moment for the series?


JR: The PS2 launch comes to mind. After thirteen years and 170 million units produced, it became the biggest success in the history of home consoles. In truth, we ran into difficulties in Europe at first. The price was high and sales were sluggish. Then GT3 arrived. It came in that elegant red cover and we bundled it with the hardware. PS2 sales shot through the roof and never slowed down. GT3 had an enormous, truly enormous impact on PS2's fortunes. That's the moment I'm most attached to.








How do you explain its consistent success over the years, unlike so many other racing series that come and go?


JR: The quality of the driving experience, in my view, is much higher than what rival platforms offer. Both in terms of the driving itself and the fidelity with which the tracks are reproduced, there is nothing comparable on the market.





There is nothing more important than absolute quality and an obsessive attention to detail. If you pursue those goals, your chances of success increase exponentially. One of Yamauchi-san's great strengths is the tenacity with which he pursues perfection.








Kazunori, you have been working on Gran Turismo for almost twenty years. Is it hard to maintain today the same focus and energy you had when you made the first one?


Kazunori Yamauchi: Our main goal is to constantly improve whatever is in front of us. It takes a lot of work, but it's what we most enjoy doing. It's also a lot of fun. I think that if we keep giving it our all, in the end we will always achieve a great result. Maybe I see it in overly simplistic terms, but that's the idea.












What do you think the young Kazunori Yamauchi working on the first Gran Turismo would have said if he could have seen GT6?


KY: I think he would say his dream has come true! This is the kind of game he would have wanted to play, and to make, back then.





Over the years we have never stopped improving. That's not easy to do, however hard you try. We consider ourselves lucky because over the last fifteen years we have in fact managed it, and with the same team, no less. If you look at the games industry as a whole, there are very few companies where the same team has worked on a single title for years.








Why will Gran Turismo 6 be released for PS3 and not PS4?


JR: On PS one there were GT1 and GT2, on PS2 GT3 and GT4, while GT5 came out for PS3, leaving an empty space next to it. The difference between Gran Turismo and GT2 is stark, even though they run on the same system. From GT3 to GT4 there was also a considerable step forward. We are sure that when GT6 comes out you will notice an equally significant evolution over GT5. PS3 still has great potential to offer a developer like Polyphony.


Another reason is the installed base of seventy million PS3s. On launch day, PS4 will have an installed base of zero units. There will be plenty of other games driving PS4 sales, not least the driving game Driveclub from Evolution Studios, a team with many successes behind it.








Graphical realism has reached such high levels that it is increasingly hard to spot big differences between instalments in the same series. How hard will it be to amaze players with GT6?


KY: That's true. The hardware has reached maturity and it is very hard to tell what's new from a graphical point of view. Professionals can do it, but casual players probably can't. As far as GT goes, we have always been passionate about new technology, and we will never stop devising and implementing new solutions, both to improve the PS3 version and in working on a PS4 version. The secret of GT6 is the blending of the "real" and the "virtual", which lets you create something totally new and fascinating by fusing two completely different worlds. The more you work on this aspect, the more interesting the result becomes.





The point is not just to narrow the gap between real and virtual. The most interesting part is when the "real" starts to influence the "virtual" and vice versa. The mutual contribution between these two worlds is what we focus on while developing GT, and it is what makes GT so special.





GT Academy is one example. It is not just about training drivers. In some ways we are redefining the concept of the racing driver. We are exploring new territory and reinventing how the racing world works.








About half of Gran Turismo's total sales have come from Europe. Why is it so popular here?


JR: The answer lies in the different nature of the markets. The American market is made up mostly of hardcore gamers who prefer shooters and action-adventure games like GTA, while the European market tends to be younger, with a larger share of casual players. That kind of demographic lends itself particularly well to a driving game like GT, which suits both sexes and all ages.








To wrap up, Jim, what is your favourite car when you play GT?


JR: I use a bit of everything! I admit motorsport isn't one of my great passions. I love the game, but I'm not as keen on real-world motor racing… Speed isn't for me. I rarely drive, and I drive slowly!

Wednesday, 22 May 2013

7 Agile Best Practices that You Don’t Need to Follow


There are many good ideas and practices in Agile development, ideas and practices that definitely work: breaking projects into Small Releases to manage risk and accelerate feedback; time-boxing to limit WIP and keep everyone focused; relying only on working software as the measure of progress; simple estimating and using velocity to forecast team performance; working closely and constantly with the customer; and Continuous Integration – and Continuous Delivery – to ensure that code is always working and stable.
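The "simple estimating and using velocity to forecast team performance" practice above is just arithmetic: average the story points completed in recent iterations and divide the remaining backlog by that average. A sketch (the sprint numbers are illustrative, not from any real team):

```python
import math

def sprints_remaining(completed_per_sprint, backlog_points):
    """Forecast how many more sprints the backlog needs, using
    average historical velocity. Round up, because a partial
    sprint still has to be scheduled."""
    velocity = sum(completed_per_sprint) / len(completed_per_sprint)
    return math.ceil(backlog_points / velocity)

# Three past sprints at 18, 22 and 20 points give a velocity of 20;
# a 130-point backlog forecasts 7 more sprints (130 / 20 = 6.5, rounded up).
assert sprints_remaining([18, 22, 20], 130) == 7
```

The forecast is only as good as the assumptions behind it (stable team, consistently sized stories, a backlog that doesn't grow), which is exactly why velocity works better as a rough planning signal than as a commitment.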



But there are other commonly accepted ideas and best practices that aren’t important: if you don’t follow them, nothing bad will happen to you and your project will still succeed. And there are a couple that you are better off not following at all.



Test-Driven Development



Teams that need to move quickly need to depend on a fast, efficient testing safety net. With Test First Development or Test-Driven Development (TDD), there’s no excuse for not writing tests – after all, you have to write a failing test before you write the code. So you end up with a good set of working automated tests that ensure a high level of coverage and regression protection.
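The red/green rhythm described above, in miniature: write a failing test first, then just enough code to make it pass. A sketch using Python's unittest; the `slugify` function and its behaviour are invented purely for illustration:

```python
import unittest

def slugify(title):
    # Step 2 (green): the minimum implementation that makes the
    # test below pass.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Step 1 (red): this test is written first, before slugify()
    # exists, and fails; writing the implementation turns it green.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Estimation is Evil"), "estimation-is-evil")

# Run the suite programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner().run(suite)
```

The by-product is what the paragraph above claims: every piece of behaviour ends up with an automated test, because the test had to exist before the code did.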



TDD is not only a way of ensuring that developers test their code. It is also advocated as a design technique that leads to better quality code and a simpler, cleaner design.



A study of teams at Microsoft and IBM (Realizing Quality Improvement through Test Driven Development, Microsoft Research, 2008) found that while TDD increased upfront development costs by 15-35% (TDD demands that developers change the way they think and work, which slows them down, at least at first), it reduced defect density by 40% (IBM) or as much as 60-90% (Microsoft) compared to teams that did not follow disciplined unit testing.



But in Making Software Chapter 12 “How Effective is Test-Driven Development” researchers led by Burak Turhan found that while TDD improves external quality (measured by one or more of test cases passed, number of defects, defect density, defects per test, effort required to fix defects, change density, % of preventative changes) and can improve the quality of the tests (fewer mistakes in the tests, tests that are easier to maintain), TDD does not consistently improve the quality of the design. TDD seems to reduce code complexity and improve reuse; however, it also negatively impacts coupling and cohesion. And while method and class-level complexity is better in code developed using TDD, project/package level complexity is worse.




People who like TDD like it a lot, so if you like it, do it. And even if you are not TDD-infected, there are times when working test first is natural – when you have to solve a specific problem in a specific way, or if you’re fixing a bug where the failing test case is already written up for you. But the important thing is that you write a good set of tests and keep them up to date and run them frequently – it doesn't matter if you write them before, or after, you write the code.



Pair Programming



According to the VersionOne State of Agile Development Survey 2012, almost 1/3 of teams follow pair programming – a surprisingly high number, given how much discipline pair programming demands, and how few teams follow XP (2%) or Scrum/XP Hybrid (11%) methods, where pair programming would be prescribed.


There are good reasons for pairing: information sharing and improving code quality through continuous, informal code reviews as developers work together. And there are natural times to pair developers, or sometimes developers and testers, together: when you’re working through a hard design problem; or on code that you’ve never seen before and somebody who has worked on it is available to help; or when you’re over your head in troubleshooting a high-pressure problem; or testing a difficult part of the system; or when a new person joins the team and needs to learn about the code and coding practices.


Some (extroverted) people enjoy pairing up, the energy it creates and the opportunities it provides to get to know others on the team. But forcing people who prefer working on their own or who don’t like each other to work closely together is definitely not a good idea. There are real social costs in pairing: you have to be careful to pair people up by skill, experience, style, personality type and work ethic. And sustained pair programming can be exhausting, especially over the long term – one study (Vanhanen and Lassenius 2007) found that people only pair between 1.5 and 4 hours a day on average, because it’s too intense to do all day long.



In Pair Programming Considered Harmful? Jon Evans says that pairing can also have negative effects on creativity:


Research strongly suggests that people are more creative when they enjoy privacy and freedom from interruption … “What distinguished programmers at the top-performing companies wasn’t greater experience or better pay. It was how much privacy, personal workspace and freedom from interruption they enjoyed,” says a New York Times article castigating “the new groupthink”.



And in “Still Questioning Extreme Programming” Pete McBreen points out some other disadvantages and weaknesses of pair programming:



  • Exploration of ideas is not encouraged; pairing makes a developer focus on writing the code, so unless there is time in the day for solo exploration the team gets a very superficial level of understanding of the code.

  • Developers can come to rely too much on the unit tests, assuming that if the tests pass then the code is OK. (This follows on from the lack of exploration.)

  • Corner cases and edge cases are not investigated in detail, especially if they are hard to write tests for.

  • Code that requires detailed thinking about the design is hard to do when pairing unless one partner completely dominates the session. With the usual tradeoff between partners, it is hard to build technically complex designs unless they have already been worked out in a solo session.

  • Personal styles matter when pairing, and not all pairings are as productive as others.

  • Pairs with different typing skills and proficiencies often result in the better typist doing all of the coding with the other partner being purely passive.

And of course pairing in distributed teams doesn't work well if at all (depending on distance, differences in time zones, culture, working styles, language), although some people still try.



While pairing does improve code quality over solo programming, you can get the same improvements in code quality, and at least some of the information sharing advantages, through code reviews, at less cost. Code reviews – especially lightweight, offline reviews – are easier to schedule, less expensive and less intrusive than pairing. And as Jason Cohen points out even if developers are pair programming, you may still need to do code reviews, because pair programming is really about joint problem solving, and doesn’t cover all of the issues that a code review would.



Back to Jon Evans for the final word on pair programming:


The true answer is that there is no one answer; that what works best is a dynamic combination of solitary, pair, and group work, depending on the context, using your best judgement. Paired programming definitely has its place. (Betteridge’s Law strikes again!) In some cases that place may even be “much of most days.” But insisting on 100 percent pairing is mindless dogma, and like all mindless dogma, ultimately counterproductive.



Emergent Design and Metaphor




Incremental development works, and trying to keep design simple makes good sense, but attempting to define an architecture on the fly is foolish and impractical. There’s a reason that almost nobody actually follows Emergent Design: it doesn't work.


Relying on a high-level metaphor (the system is an "assembly line" or a "bill of materials" or a "hive of bees") shared by the team as some kind of substitute for architecture is even more ridiculous. Research from Carnegie Mellon University found that


… natural language metaphors are relatively useless for either fostering communication among technical and non-technical project members or in developing architecture.


Almost no one understands what a system metaphor is anyway, or how it is to be used, or how to choose a meaningful metaphor, or how to change it if you got it wrong (and how you would know if you got it wrong), including one of the people who helped come up with the idea:

Okay I might as well say it publicly - I still haven't got the hang of this metaphor thing. I saw it work, and work well on the C3 project, but it doesn't mean I have any idea how to do it, let alone how to explain how to do it.


Martin Fowler, Is Design Dead?


Agile development methods have improved development success and shown better ways to approach many different software development problems – but not architecture and design.


Daily Standups



When you have a new team and everyone needs to get to know each other and needs more time to understand what the project is about, or when the team is working under emergency conditions trying to fix or finish something under extreme pressure, then getting everyone together in regular meetings, maybe even more than once a day, is necessary and valuable. But whether everyone stands up or sits down, and what they end up talking about in a meeting, should be up to you.


If your team has been working well together for a while and everyone knows each other and knows what they are working on, and if developers update cards on a task board or a Kanban board or the status in an electronic system as they get things done, and if they are grown up enough to ask for help when they need it, then you don’t need to make them all stand up in a room every morning.



Collective Code Ownership



Letting everyone work on all of the code isn't always practical (because not everyone on the team has the requisite knowledge or experience to work on every problem) and collective code ownership can have negative effects on code quality.


Share code where it makes sense to do so, but realize that not everybody can – or should – work on every part of the system.

Writing All Requirements as Stories



The idea that every requirement can be written as a User Story in 1 or 2 lines on a card, that requirements should be deliberately terse (so that the developer has to talk to someone to find out what’s really needed),
and that they should all follow the same template

“As a type of user I want some goal so that some reason…”


is silly and unnecessary. This is the same kind of simple-minded orthodoxy that led everyone to try to capture all requirements in UML Use Case format, with stick men and bubbles, 15 years ago.


There are many different ways to effectively express requirements. Sometimes requirements need to be specified in detail (when you have to meet regulatory compliance or comply with a standard or integrate with an existing system or implement a specific algorithm or…). Sometimes it’s better to work from a test case or a detailed use case scenario or a wire frame or some other kind of model, because somebody who knows what’s going on has already worked out the details for you. So pick the format and level of detail that works best and get to work.


Relying on a Product Owner


Relying on one person as the Product Owner, as the single solitary voice of the customer and the “one throat to choke” when the project fails, doesn't scale, doesn't last, and puts the team and the project and eventually the business at risk. It’s a naïve, dangerous approach to designing a product and to managing a development project, and it causes more problems than it solves.


Many teams have realized this and are trying to work around the Product Owner idea because they have to. To succeed, a team needs real and sustained customer engagement at multiple levels, and they should take responsibility themselves for making sure that they get what they need, rather than relying on one person to do it all.

Wednesday, 15 May 2013

Certified Agile: The PMI-ACP Exam

I sat for the Project Management Institute’s Agile Certified Practitioner (PMI-ACP) exam earlier this week. The PMI-ACP tests your understanding of common Agile development methods, values and practices. It focuses on basic Agile principles, and on Scrum and XP in detail, as well as fundamentals of Lean and Kanban.



Unlike the PMP, there is no Body of Knowledge that defines best practices and a process framework for this certification. Instead there is a certification content outline that explains at a high level the tools, techniques, knowledge and skills that you will be expected to know and will be tested on, and a reference list of books to read which includes some of the usual suspects. Out of this list I’d recommend reading Mike Cohn’s books Agile Estimating and Planning and User Stories Applied - they are useful for the exam and they're worth reading regardless. If you’re not working in an XP shop you should also read Kent Beck’s Extreme Programming Explained to make sure that you understand XP, and you must read up on the basics of Lean and Kanban. And of course you need to memorize the Agile Manifesto and the Twelve Principles of Agile Software Development front to back.




But I know from writing the PMP several years ago that experience and general reading aren’t enough to prepare for a PMI certification exam. PMI wants everyone who holds a certification to know the same things, and to share the same values and to think and act the same way. There’s an emphasis on orthodoxy – you’re tested not on what you would do (based on your experience and common practical knowledge), but what you should do according to PMI's definition of what “the right way" is to do something. And PMI’s exams are as much a test of your ability to read and write an exam as they are of the subject matter, with trick questions and trip-up answers and questions which are purposefully hard to understand, and even some extra questions thrown in which don’t make sense at all. Writing a test like this is not fun, although the PMI-ACP exam is certainly not as hard as the PMP exam - you shouldn’t need the 3+ hours that you’re given to complete this test.



So like others, I decided to use an exam prep guide to finish my studying.



The PMI-ACP Exam: How to Pass on Your First Try by Andy Crowe is a quick overview of the material that you should know for the exam. Easy to read and easy to follow, it defines key terms and “doing Agile right”, roles and responsibilities and rituals and tools, and covers communication and collaboration issues, and includes some sample questions (and access to a sample online exam). This is not an especially insightful book, but I found it useful for last minute review and cramming.



I did most of my studying with Mike Griffiths’ PMI-ACP Exam Prep: A Course in a Book for Passing the PMI Agile Certified Practitioner (PMI-ACP) Exam, a much more complete study guide, and a good overview of Agile development that is worth keeping and reading on its own. This book builds on materials that Griffiths published earlier on his blog and it is especially good on Agile reporting tools.



Griffiths is one of the experts who created the PMI-ACP program and so he understands what you need to know in depth, and he is a good writer. However, his book is harder to study from than Crowe’s, because it contains a lot more details and because it is structured around the artificial domains that PMI uses to describe Agile development. This results in several discontinuities, where an idea or practice is introduced under “Value Driven Delivery” and then continues later under “Adaptive Planning” or “Continuous Improvement” or one of the other domains (it is not necessary by the way to learn the domains for the exam).



If you have solid experience with Agile development (which you need in order to meet the qualifying bar), especially Scrum and XP, you should be able to pass the exam with the help of Griffiths’ guide and some general reading to fill in gaps.



Studying for the PMI-ACP has made me examine Agile development ideas and practices in more detail (which is why I decided to apply for the certification). But it hasn't changed how I think about Agile practices and methods or how I think you should follow them. I am just as convinced today as I was before that the key is not following some method in a pure way, but instead to build your own toolkit, to borrow what works from different methods and adapt them to your specific requirements, constraints and situation. And the more that you know and understand about Agile methods and practices, the more tools you have for your toolkit.


Monday, 13 May 2013

Special: 15 Years of Gran Turismo







2013 marks the fifteenth anniversary of Gran Turismo on PlayStation.

To celebrate the occasion, we look back over the prestigious history of the real driving simulator.




In 1998, the cover of the European edition of Gran Turismo for PlayStation showed a mysterious supercar hidden under a sheet. Behind it lay the debut of the biggest success in PlayStation history, a series whose ultra-realistic graphics and handling would go on to redefine the driving game genre.





Gran Turismo combined the adrenaline of arcade racing with the realism of a simulation. The game featured almost two hundred cars, far more than any other driving title of its day. Players could put their driving skills to the test to earn the licences needed to enter events, while prize money let them buy some of the world's most exclusive cars, along with spare parts to improve their performance. This level of detail was unprecedented, and it allowed Gran Turismo to reach beyond video games and make a name for itself in the world of real motorsport too.







The evolution of the driving experience






Gran Turismo 2, released two years later, raised the number of available cars to an incredible 650. Credit goes to the enthusiasm of the series' creator, Kazunori Yamauchi, whose deep knowledge of classic and modern cars has helped build a close working relationship between the developers at Polyphony Digital and the car manufacturers themselves.





"Our relationships with the manufacturers have always been fundamental," explains Yamauchi. "That's because the cars we put in our games are an integral part of society, and Gran Turismo has always aimed to reach people who know nothing about the world of video games."





PlayStation 2 launched in Europe in 2000, creating worldwide anticipation for what Polyphony would manage to create on the new hardware. The answer came a year later with Gran Turismo 3: A-Spec, which a casual observer could hardly believe was a video game and not a real race.







Full speed ahead






The arrival of Gran Turismo 4 was preceded by Gran Turismo 4 Prologue, which offered a preview of the series' future. Released in 2005, Gran Turismo 4 featured more than seven hundred cars from eighty different manufacturers, from the 1886 Daimler Motor Carriage to prototypes imagining the future as far ahead as 2022.





It also introduced B-Spec mode, which let players swap the role of driver for that of team manager. Another new feature in Gran Turismo 4 was the driving missions, which challenged drivers to master specific techniques, such as slipstreaming.





European Gran Turismo players began racing online in 2008 thanks to Gran Turismo 5 Prologue for the PlayStation 3 system. It was the first title in the series to include network features such as online races, leaderboards and GT-TV, a service that let players watch motoring programmes right inside the game.







Taking to the track




Gran Turismo 5 Prologue can also claim credit for uncovering a potential talent like Lucas Ordoñez. The winner of the first edition of GT Academy, a competition for Europe's best GT players, went on to earn his racing licence and compete in the Dubai 24 Hour endurance race, the prize on offer, and he is now a professional driver.





GT Academy is a source of great pride for Yamauchi: "It was a dream I had nurtured ever since I developed my first driving simulation, above all because I was sure that one day I would make it happen. And when it finally came true with the 2008 edition of GT Academy, the emotion was overwhelming."





In 2009, Polyphony Digital broke new ground with Gran Turismo for PSP, which arrived on the portable entertainment system with more than eight hundred cars and thirty-five tracks, plus online racing and car trading between players. Thanks to its depth and spectacle, the game quickly became a big hit with handheld fans.







A new benchmark for the industry




The long-awaited Gran Turismo 5 was released on PS3 in November 2010, once again raising the bar for the realism of its cars and tracks. The game includes more than a thousand cars, from classics of the past to family runabouts, right up to dream machines like the Bugatti Veyron. Real circuits have been recreated in incredible detail, with historic tracks such as Germany's Nürburgring sitting alongside dream city circuits that let drivers race through the streets of Rome, London and Tokyo.





After introducing online play in the Prologue edition, Gran Turismo 5 did not stop at sixteen-player online races; it went much further. Special time trials and seasonal events provide an ever-growing number of challenges, while the Course Maker lets players create custom tracks to share with the community.





Gran Turismo 5 gives even more space to the world of real motorsport. Formula 1 driver Sebastian Vettel and NASCAR champion Jeff Gordon appear as special guests, while the famous motoring show Top Gear has lent its celebrated test track. And since its 2008 debut, GT Academy has gone from strength to strength, discovering six talents across Europe and producing a team that finished second in the SP3 class of the Dubai 24 Hour.





Fifteen years on, it is clear that Gran Turismo has changed not just the face of driving games but also that of the racing world at large. So it is only right to leave the last word to the man chiefly responsible for its success, Kazunori Yamauchi:





"I think of Gran Turismo as a kind of movement. And I would be truly happy if the movement around GT left its mark on history."



Via / it.playstation.com

Wednesday, 8 May 2013

Paper Pool - Coming soon for iOS and Android


We are excited to announce that our new game Paper Pool is coming soon for iOS and Android!

Ever wish you could play with the stars in the sky? Now you can! Equal parts planning and execution, Paper Pool is mini-golf combined with billiards and set in a lush panoramic world created from construction paper.

From the creators of the hit mobile game Drawdle, Paper Pool offers a fresh challenge for billiards experts and novices alike.

Screenshots and trailers coming soon, stay tuned.

Tuesday, 7 May 2013

Appsec – Can anything Stop the Bad Guys?

WhiteHat Security recently published their 2012 report on website security. Like Veracode, WhiteHat collects and analyzes data from security tests run across their customer base each year. WhiteHat's analysis focuses on data from dynamic testing of 15,000 sites at 650 organizations – all results manually reviewed and verified. From this data they are able to see trends and to build industry scorecards. The report makes for fascinating reading.



On average, web sites are getting more secure each year: the average web site had over 1,000 vulnerabilities in 2007, and only 56 in 2012. SQL injection, the most popular and most serious attack vector, is found in only 7% of their customers’ web sites.



This is the good news.




What made WhiteHat’s analysis this year especially valuable is that they also surveyed customers about their secure SDLC practices and the effectiveness of their security programs. Although the survey set was small (less than 20% of customers responded), this data allowed WhiteHat to correlate vulnerability data with secure SDLC practices and operational controls, as well as appsec program drivers and breach data.


Compliance impact on Appsec




WhiteHat found that the main driver for fixing security vulnerabilities is compliance – this matches up with findings from the SANS Appsec survey last year.




But they also found that compliance is the number one reason that some vulnerabilities don’t get fixed: many organizations are following the letter of the law, doing what compliance says that they have to and only what they have to, not going any further even if it would make sense to do so from a risk management perspective or to meet customer demands.


Best Practices and Tools – What Works?



Training developers seems to help. More than half of WhiteHat’s customers had done at least some security training for developers. Organizations that invested in security training for developers had 40% fewer vulnerabilities and resolved them 59% faster.




But other best practices and tools don’t seem to be effective.




Just over half of customers relied on application libraries or frameworks with centralized security controls. Relying too much on these controls seems to provide a false sense of security: organizations that used them had 64% more vulnerabilities and resolved them 27% slower.




One factor that makes these organizations more vulnerable is that if the underlying framework is exploitable, then all of the sites that rely on it are vulnerable, like the recent security problems with Rails. Another problem may be that developers are naïve about what a security library will do for them: Apache Shiro or something like it, for example, will take care of a lot of application security problems, but it won’t protect your app from SQL injection or XSS or CSRF or other common attacks, leaving big holes for the bad guys. There’s more work that still needs to be done to make an application secure.
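To make that gap concrete, here is a minimal, hypothetical sketch (in Python, using the standard library's sqlite3 module as a stand-in for whatever data access layer an app might use) of the kind of SQL injection hole that an authentication/authorization framework like Shiro never even sees, and the parameterized-query fix that developers still have to apply themselves:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# A classic injection payload supplied as a "user name".
attacker_input = "nobody' OR '1'='1"

# Vulnerable: building SQL by string concatenation lets the attacker's
# quote characters rewrite the query itself.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(unsafe)  # every user comes back, not just "nobody"

# Safe: a parameterized query treats the input strictly as data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # empty - no user actually has that literal name
```

An access-control framework has no visibility into queries like this, which is why pushing all of the responsibility onto a security library leaves exactly these holes open.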




Organizations that use static analysis had 15% more vulnerabilities found through WhiteHat's dynamic testing, and resolved them on average 26% slower. Maybe that's because running a tool doesn't accomplish anything if you don’t fix the vulnerabilities it finds. Or because there isn't much overlap between the vulnerabilities that static analysis finds and those found through dynamic analysis.



But Nothing Stops Breaches



85% of WhiteHat's customers test their apps pre-production, a third of them before every change is pushed out. These organizations are trying to do the right thing.




But almost one quarter of WhiteHat’s customers had experienced security breaches as a result of an application vulnerability. It doesn't seem to matter if they tested often, or if they trained their developers, or how much they trained them, or if they used static analysis or secure libraries or a WAF or other operational security controls. These organizations were just as likely to experience a breach as organizations that didn't do as much training or as much testing or didn't use the tools.




WhiteHat’s report raises a lot of fascinating questions. Do the breach findings mean that security testing, or developer training or using secure libraries or other tools don’t work?




Or is this simply evidence of the essential asymmetry of the “Attacker’s Advantage and the Defender’s Dilemma”? Even though the number of serious vulnerabilities on average is declining significantly year on year, 86% of all the web sites that WhiteHat tested had at least one serious vulnerability (and keep in mind that WhiteHat - or any other vendor - can't catch every vulnerability). On average only 61% of these vulnerabilities were fixed and it took 193 days for this to get done. All it takes is one vulnerability for the bad guys to get in, and we’re still giving them too many chances and too much time to succeed.





Or maybe we just need more time to see the results of training and testing and tools and other best practices. Time for developers to understand and fix legacy bugs and to change how they design and build software to be more safe and secure in the first place, to “build security in”. Time for management to understand that compliance shouldn't be the main driver for building secure software. Time to raise the bar enough that the bad guys start looking for another, easier target. We’ll have to wait another year to see WhiteHat’s next report and see if some more time makes any real difference.


Monday, 6 May 2013

Announcement right around the corner

I posted some screenshots of Sunset Pool back in March, and now we're close to being ready to make a formal announcement. Watch this space, and in the meantime here is a bonus screenshot of a new area:

(click to enlarge)

Saturday, 4 May 2013

Seven months later...

Hello!

It's been an absurd amount of time since I posted an update on here, during which not a lot of progress has been made on the PC versions of AJ1&2, or indeed much else programming-wise.

This is mostly due to my poor, abused back, which has spent several years propping up my arms and head while I tap away at the computer, with no back rest to support it. Turns out that this is VERY BAD, and the final stretch of Apple Jack 2 pretty much knackered it completely, to the point where I had to lay off the computer work until I could make it stop bloody hurting all the time.

Thankfully, after getting a proper chair and doing stretches for several months it's finally back to normal and I can get on with finishing the conversions. No more testing is required at this stage, but thanks to everyone who has continued to offer help in that area.

Once THAT'S done, I can finally start to work on a new game! There are dozens of ideas ready to go, from the simple to the absurdly ambitious. The one I really want to do is a very peculiar shoot'em up, which would involve hooking up with a decent artist (due to the amount of drawing required), and an actor to play the part of a giant space orange. Brian Blessed would be the perfect man for the job, but he's a bit expensive and there's a lot of dialogue.

Other ideas include a stealth game, a one-button platformer, a 3D platformer, a body-hell beat'em up and a puzzle game. I've also got the set-up and plot of Apple Jack 3 nailed down, but I really want to work on something else first.

Progress should be swift on the PC conversions, so keep your scanners peeled for more updates.



P.S. To Futil1ty in the comments section of the previous entry - there isn't really a trick to completing 5-13, you just have to be good at wall jumping. It was actually even harder when the game was first released, but so many people got stuck I had to patch it. Think yourself lucky!