1. About Peopleware

    The book Peopleware by Tom DeMarco and Timothy Lister is a popular book on software project management. The first edition appeared in 1987, and the book was revised in 2013. Peopleware addresses the management of software developers.

    This article is a short list of what I take away from the book. The book is really easy to read and full of interesting insights; I recommend it to anyone working on software projects, even if you’re not into management.

    The authors note that a lot of software projects fail, and that in their experience, most of the time the failure is “people-related”. The main thesis is stated in the first chapter and reads:

    The major problems of our work are not so much technological as sociological in nature.

    The book is divided into 5 parts, but I think we can extract 2 main ideas:

    1. Productivity in a software project is quite different from productivity on a physical assembly line (e.g. cheeseburger production),
    2. You will have better productivity by actually showing respect for and trust in your people.

    The authors explain the first point to support the second, and then expand on the multiple facets of what makes software developers work better.

    Productivity reassessed

    Software development is a work of the mind. It requires abstract thinking and creativity. By contrast, making a cheeseburger requires no novelty, and its production can be standardized.

    When you replace a developer, the new person brings a different experience every time; turnover is more costly than you think. A cheeseburger maker, by contrast, is largely interchangeable.

    When developing software, quality matters a lot, because software evolves and must be maintained. You can’t trade off quality for productivity.

    What’s not so easy is keeping in mind an inconvenient truth like this one:

    People under time pressure don’t work better — they just work faster.

    In order to work faster, they may have to sacrifice the quality of the product and of their own work experience.

    How to care for software developers

    If you want to know all about it, go read the book. Here I will keep five points that I found particularly interesting, given my short experience of software development.

    Respect their work

    • Allow, even encourage, errors: people will feel freer to try new things and will learn in the process.
    • People’s self-esteem is tied to the quality of the work they produce: strive for quality.
    • Be open to change.
    • Create an atmosphere of safety, remove competition inside a team.
    • Avoid unnecessary dress codes, Methodologies, and bureaucracy, which make people feel that their appearance, or a Method (with a big M), or political correctness, is more important than their actual work.

    Chris Lema selected that last point as one of the Three Reasons Why High Performers Quit:

    Proper procedure is more valuable than high performance.

    Respect their time

    • Sometimes a developer has to stop coding and think. That’s OK.
    • Avoid useless meetings, etc.
    • Avoid impossible deadlines: your developers also want the work done; they’ll do it even without a deadline.
    • Be aware that working longer hours is not being more productive. Respect your people’s work/life balance.

    Your people are very aware of the one short life that each person is allotted. And they know too well that there has got to be something more important than the silly job they’re working on.

    Let them work in peace

    The main idea is that building software means building abstract constructs, which puts the developer in a fragile state of flow.

    • Give them a nice, quiet place to think.
    • Don’t let them be interrupted at any time (including by your own visits). Be mindful of the effect of meetings, telephone calls, emails, etc.

    The authors actually devote a full part of the book to the workspace. This is reinforced by Joel Spolsky, for instance in 2000 with Where do These People Get Their (Unoriginal) Ideas? and in 2006 with A Field Guide to Developers.

    Help them make progress

    • Trust people: let them be autonomous once you’ve entrusted them with the job.
    • Use deadlines so they can see the work progressing in the right direction.
    • Grow teams: when people inside the team feel safe, with no competition, they’ll naturally coach each other and share knowledge.
    • Facilitate learning.
    • Make people feel involved.
    • Let them see the work progressing through small intermediate goals.
    • Use tests for private self-assessment.
    • Avoid creating competition through bonuses tied to performance.


    The authors tell stories and give supporting data to make their point. I think they might go a bit too far at times, for instance when they insist on making the workplace a community: some people may like that, but others may not. In any case, I believe I would work better in the environment they describe throughout the book.

    If you’re still not sure whether you should read the book, there’s a more classic review here, and thousands of reviews on Goodreads.

    Please share your comments or relevant articles on Reddit. I’d like to write a follow-up article with some practical examples, e.g. a company or project failure or success due to a specific sociological factor.

  2. Andrew Ng about Deep Learning at Paris ML Meetup

    The latest Paris Machine Learning meetup, #12, hosted at Google Paris, was actually held Europe-wide, together with the London, Berlin and Zurich ML meetups. Andrew Ng was the guest star, joining from San Francisco via Hangout. You can watch the video of the entire meetup on YouTube, but make yourself comfortable because it’s almost 3h30 long. Here I will only cover Andrew Ng’s talk and the first set of questions he answered, which is only a bit more than half an hour.

    Talk summary

    Andrew Ng talked about deep learning, a subject in which he and his teams have been deeply involved, and which he describes as “our best shot at progress towards real AI”.

    In the traditional learning framework, there are 3 steps:

    • take an input,
    • design & extract features,
    • feed a learning algorithm.
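
    The three steps above can be sketched as a toy Python program (entirely my own illustration, not from the talk: hand-designed features feeding a deliberately trivial "learning algorithm"):

```python
# Toy sketch of the traditional learning pipeline:
# input -> hand-designed features -> learning algorithm.

def extract_features(text):
    """Step 2: hand-designed features (length and vowel ratio)."""
    vowels = sum(c in "aeiou" for c in text.lower())
    return [len(text), vowels / max(len(text), 1)]

def train_threshold_classifier(samples, labels):
    """Step 3: a minimal 'learning algorithm' that learns a length threshold."""
    lengths = [extract_features(s)[0] for s in samples]
    pos = [l for l, y in zip(lengths, labels) if y == 1]
    neg = [l for l, y in zip(lengths, labels) if y == 0]
    threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda text: 1 if extract_features(text)[0] > threshold else 0

# Usage: separate "long" from "short" strings.
clf = train_threshold_classifier(
    ["hi", "ok", "a very long sentence", "another long one"],
    [0, 0, 1, 1])
```

    Deep learning replaces the hand-written `extract_features` with learned representations, which is the whole point of the next paragraphs.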

    The idea behind deep learning is driven by the “one learning algorithm” hypothesis, i.e. the idea that there may be a single algorithm that can learn to process almost any type of data. Our brain is capable of very impressive rewiring, reusing areas of the cortex for different kinds of learning when a sensor is disconnected. There must be something in common in the way our brain learns to process the different channels.

    One way to implement this is representation learning, that is, trying to learn the features. Andrew Ng described how sparse coding is a useful way to learn features, not only for images but also for audio. And you can repeat the process to build feature hierarchies.

    After observing that this kind of system worked better and better as the number of features increased, the idea was simply to scale up. He started working on neural networks with millions of parameters in the Google Brain project. They scaled up to 1 billion parameters, and famously had the neural network watch YouTube for a week: it learnt the concepts of faces and cats. They were able to ship new products with the same technology.

    The next question was: how to make this more easily available, i.e. without needing the huge Google infrastructure? In a word: use GPUs.

    For future work, Andrew Ng explained that deep learning has been applied mostly with supervised learning, to exploit the large amount of labeled data accompanying the digitization of our society. It is an interesting feature of deep learning algorithms that they keep getting better with more data. He pointed out that there has been underinvestment in unsupervised learning. The obvious reason is that it is hard. However, it is probably nearer to how we learn. He gave the example of how we teach a child to recognize a car: you won’t point out tens of thousands of cars, however loving a parent you are! A few labeled examples are enough; most of the learning is unsupervised.

    About his own future work, Andrew Ng explained that, after Coursera, he wants to spend more time working on machine learning, GPUs and AI, which is what he’ll be doing at Baidu.


    Andrew Ng then proceeded to answer the questions asked on Google Moderator.

    1. “Could you give your top 3 most promising topics in machine learning?” The first answer was “unsupervised learning”. Then, about supervised learning, he mentioned the importance of making tools and infrastructure easily available to teams, and listed “speech recognition” and “language embedding”.

    2. “Your introductory course to ML at Coursera was really great. Will you teach an advanced ML course at Coursera with the latest techniques around Convolutional Neural Networks, Deep Learning, etc.?” He’s not sure how he’ll find the time, but he’s thinking about it and wishes to do so.

    3. “How do you see the job of a Data Scientist in the future?” The increase in digital activities that create data and the rapid drop in computational cost are at the origin of the “big data” trend. As long as these two phenomena continue, the demand for data scientists will grow, and don’t worry, deep learning won’t replace them any time soon. This is an exciting discipline. Data science creates value.

    4. “What are common use-cases where re-sampling (e.g. bootstrapping) is not sufficient for estimating distributions and considering the whole (Big)Data set is a real advantage?” Large neural networks need lots of data. When you have a lot of parameters and a lot of flexibility in the model, bootstrapping doesn’t help. With high VC dimension, it is simply better to increase the size of the data set.
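
    A quick numeric sketch of that answer (my own toy example, not from the talk): bootstrapping only resamples the data you already have, so it can estimate the uncertainty of your statistic but cannot shrink it; collecting genuinely more data does.

```python
import random
import statistics

random.seed(0)

# A small sample of 100 points from a standard normal distribution.
small = [random.gauss(0, 1) for _ in range(100)]

# Bootstrap: resample the same 100 points 1000 times to estimate the
# standard error of the mean. This measures the error, it doesn't reduce it.
boot_means = [statistics.mean(random.choices(small, k=len(small)))
              for _ in range(1000)]
boot_se = statistics.stdev(boot_means)      # roughly sigma/sqrt(100) = 0.1

# Collecting 16x more real data actually shrinks the standard error
# by a factor of 4 (roughly sigma/sqrt(1600) = 0.025).
big = [random.gauss(0, 1) for _ in range(1600)]
big_se = statistics.stdev(big) / len(big) ** 0.5
```

    For a flexible, high-VC-dimension model the same logic applies to generalization error: resampling a fixed data set adds no new information, while a larger data set does.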

    5. “What will be the next “killer application” of Deep Learning after visual object recognition and speech recognition?” He thinks that’s speech recognition; vision and language are still to come.

    6. “How do you see the gap between the research and practical implementation of ML algorithms?” You should minimize this gap. Have researchers for the innovative ideas, but avoid intermediate steps between researchers and production: the same person does the research and works with the product team.

    7. “What is it that you find most difficult in machine learning?” Innovation. Innovation requires teamwork. This entails employee development and teaching, to empower people to innovate “like crazy”.

    I’m partial to question n°7 since I’m the one who asked it. I really like his answer; it’s no wonder he founded Coursera.


    • Deep learning is not a new fad. The theory behind it is decades old; it’s the result of years of research, and it’s booming thanks to hardware improvements. It’s not pure chance that it works so well. And there’s still research to do to make it even better.

    • More data & more computational power make for better performance. OK. No intelligence required? It seems it’s not that obvious.

    • Keep in mind that innovation is at the core of a researcher’s work. And that is the most difficult. Choose/educate your team well. Share knowledge.

    • Yann LeCun had a different approach to introducing deep learning for his talk at ESIEE a few days before Andrew Ng. Maybe I’ll write about it later.

    Please share your thoughts about this post and the talk on Reddit.

  3. On survey bias

    How do you bias a survey to get just the result you want?

    Bias the device

    Here’s how SNCF does it. They put a device of sorts in the newly renovated Saint-Lazare station. The display invites passers-by to say how much they love the new station (of course it is beautiful: it’s brand new, full of natural light, and though we suffered through years of works, it was worth the pain). You have two choices: either click on a pink button with a big heart shape (meaning “I love it”), or take a picture of a QR code “to explain why you don’t love it”.

    The surest way to get more than 90% approval.

    Bias the questionnaire

    But it gets better. After clicking the button a few times (OK, maybe they handle that kind of childish behavior), I scanned the QR code. There, after waiting for long seconds, I was redirected to a page saying “3221 people love the new station, and you?” with… a button to say “I love it”, and a smaller one leading to a new page to enter a message. All in a positive tone (help us improve!). So cute.

    I left a message saying something about statistics and reliability. For now, I have only received the standard thank-you reply.

    What is it good for?

    From the URL linked by the QR code, I found the campaign was using MyFeelBack solutions. So there are customer feedback professionals making this kind of survey, and there are people making money from it. Well, I can only hope they take into account the bias introduced by their procedure when they analyze the results. How could they do this?

    • compare the number of clicks with the number of people coming to the station everyday (or, if they have the figure, the number that can pass near the device),
    • add a new device with a random question and estimate how many people want to click on it,
    • count the number of customers from which you can expect feedback with a button vs. a QR code and a form,
    • only take the written feedback into account.
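
    As a back-of-the-envelope illustration of the correction involved (all counts and response rates below are numbers I made up, not figures from SNCF or MyFeelBack): since pressing a button is nearly free while scanning a QR code and filling a form costs real effort, the raw counts should be reweighted by assumed response rates before comparing them.

```python
# Hypothetical raw counts from the device.
love_clicks = 3221        # pink-button presses ("I love it")
dislike_forms = 150       # QR code + written form submissions

# Assumed response rates (pure guesses for illustration): almost anyone
# will press a button, very few unhappy people will fill in a form.
button_response_rate = 0.30
form_response_rate = 0.01

# Reweight to estimate how many people actually hold each opinion.
est_love = love_clicks / button_response_rate
est_dislike = dislike_forms / form_response_rate

raw_approval = love_clicks / (love_clicks + dislike_forms)   # ~96%
adj_approval = est_love / (est_love + est_dislike)           # far lower
```

    With these (invented) rates, a 96% raw approval drops below 50% once the friction difference is accounted for, which is exactly why the raw figure on the device proves very little.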

    Maybe the device is not entirely useless, after all? What do you think?

  4. On recruiting in computer programming

    When hiring for specific technical positions, finding the right candidate can become a nightmare. You want someone who can code right, of course. You also want someone with decent knowledge. Conversely, if you’re looking for a job, you have probably noticed it’s quite difficult to convey all the depth and breadth of your knowledge in a classic resume.

    Enter the Internet era: there are plenty of ways to show off your skills and knowledge. Here’s the idea: start a blog, answer questions on Stack Overflow, become influential on Twitter; if you write open source code, you can show your GitHub account. Then give all that information to recruiters. If you’re the recruiter, ask candidates to show you their GitHub accounts.

    James Coglan wrote an interesting blog post on why using GitHub as a resume is not a good idea. His point is that you will exclude people who may be competent programmers but simply don’t have their code on GitHub:

    we’re creating a filter that means only people with copious leisure time and no other hobbies or commitments will end up in these jobs. People have plenty of valid reasons not to spend their spare time on their job, and certainly most of the great programmers I’ve worked with aren’t big-time GitHubbers.

    Sure. Yet, hiring is difficult because one wants to see the candidate’s coding abilities. Coglan has some good advice there too:

    Fine, but how are we supposed to hire people?

    The hard way. Sorry everyone, but it’s the best we’ve got. People’s problem-solving ability and reasoning can’t be surmised from reading the end result of those processes, you have to talk to them. … If you want to choose wisely, and fairly, stop demanding free work from people.

    Ironically, a few days after reading this, I saw a link to resume.github.io on Twitter. A lot of people were enthusiastically tweeting and linking to it. Obviously there is a debate about how to do hiring properly for computer programming positions.

    In the same vein: looking at Stack Overflow points. Personally, I use SO a lot, and often find my answers there. I wish I had the time to answer more questions, but answering one often requires quite a bit of digging, so I can’t give more than a few hints.

    Anyway, when recruiting, I’m sticking to the “hard way”: test people. For code, that’s point 11 in the Joel Test, by the way.

  5. Why “Miscellany”?

    The idea of a “commonplace” book is not new. I’ve taken the title “Miscellany” from Faraday. Here’s an excerpt from a biography of Michael Faraday by Colin A. Russell.

    By a great good fortune, in 1809 he (Michael Faraday) lighted on a book that had just been reprinted […] Its title could not have been more appropriate: The Improvement of the Mind. It was a famous work by a man well known not as a philosopher or scientist but as a writer of hymns. […] Among this book’s recommendations were assiduous reading, attendance at lectures, correspondence with others of similar mind, formation of discussion groups, and the keeping of a “commonplace” book in which to record facts and opinions that might otherwise be forgotten. Within a few weeks the industrious Faraday had begun a commonplace book of his own, formidably entitled The Philosophical Miscellany.

    Let’s try this experiment, and see what we can learn on the way.
