Backstage at EuroSTAR 2016 Programme Planning
I’m on the EuroSTAR 2016 Programme Committee. Until recently, we were having closed-door meetings to determine which presentations would be accepted.
Now that the programme has been announced, I can finally share what happened.
The process reminded me of a software project – smart people using their wisdom and experience as they wrangle under time pressure to create a product with implicit and explicit requirements.
We used Basecamp to communicate status on tasks like reaching out to colleagues to see if they could do a keynote and following up on submissions, but the real work began in January, when we started reviewing incoming submissions. By the end of February, we had each worked separately through 540 submissions. That’s right: 540, the largest number of submissions in EuroSTAR history, and each of the four of us on the committee looked at every single one. In addition to the committee, an independent volunteer panel of approximately 50 people scored the submissions (each panellist reviewed up to 30 anonymized submissions).
Using a simple web interface, we opened each submission and graded it from 1 to 5 in the following areas:
- How engaging is the topic?
- How fresh is the idea?
- How appropriate is the scope?
- How relevant is it to the theme? (“Testing to Learn, Learning to Test”)
- What’s the overall feeling? Are we excited to see this topic in action?
The highest possible score was 25.
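The arithmetic of the rubric is simple enough to sketch. Here is a hypothetical model of it in Python; the criterion names and the validation are my own illustration, not the actual review tool’s schema:

```python
# Illustrative sketch of the scoring rubric: five criteria,
# each graded 1-5, summed into a total out of 25.
# Criterion names are invented for this example.

CRITERIA = ["engaging", "fresh", "scope", "theme_relevance", "overall"]

def total_score(grades: dict) -> int:
    """Sum the five 1-5 grades into a single score out of 25."""
    for name in CRITERIA:
        if not 1 <= grades[name] <= 5:
            raise ValueError(f"{name} must be between 1 and 5")
    return sum(grades[name] for name in CRITERIA)

grades = {"engaging": 5, "fresh": 4, "scope": 4,
          "theme_relevance": 5, "overall": 4}
print(total_score(grades))  # 22 out of a possible 25
```

A submission scoring 5 on every criterion reaches the maximum of 25.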
Out of the submissions I reviewed, I granted perfect scores to only about 5%.
For me, the theme of each submission was critical. I wanted every submission to really come through and integrate “Testing to Learn, Learning to Test”.
Finalizing the Program (Ireland, early March)
There were six of us who met in person in Galway, Ireland.
See Ruud Cox on the left? He’s making an appeal to Siobhan Hunt on his right about an orange Sticky Note. It was Jay Sethi’s submission titled “What happened when we Switched our Data Center off?”
Here you see Program Chair Shmuel Gershon closest to the board, ready to move it to the next track because it may be a better fit there. Lorraine Banks looks straight on, ready to move a yellow Sticky Note in its place. Oh, and Paul Madden on the right? He’s holding a web cam in his left hand so fellow program committee member Anna Hoff can see what’s going on as she participates from her home in Sweden.
One of the key pieces of equipment was the projector so we could all see the submissions in the spreadsheet.
This was our view from the room at the Meyrick Hotel in Galway. You’re looking west, and if you squint your eyes, you may just be able to see my house in the United States, about 5,000 miles away.
As we assembled in Galway, we basically locked ourselves in a small hotel conference room for two days with the sole purpose of choosing the speakers.
Roles & specialties
No one had a problem with bringing their skill and specialty, and being known for that role.
- Technical Support: Paul
- Conference Subject Matter Expert, Sticky Note writer: Lorraine
- Conference Subject Matter Expert, Record-Keeper: Siobhan
- Leader / “ScrumMaster”: Shmuel
- Jurists: Ruud, Anna, and I
While most development projects have a code repository, we had a master Excel spreadsheet. Each submission was a row with the person’s name, topic, and score.
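To make the “spreadsheet as repository” idea concrete, here is a hypothetical sketch of how such rows could be ranked by average reviewer score. The speaker names, topics, and numbers are invented; this is my illustration, not the committee’s actual spreadsheet:

```python
# Illustrative model of the master spreadsheet: one row per
# submission, with the scores given by the four committee reviewers.
# All data below is made up for the example.
from statistics import mean

submissions = [
    {"speaker": "A. Tester", "topic": "Exploratory testing",
     "scores": [22, 19, 24, 21]},
    {"speaker": "B. Dev", "topic": "CI pipelines",
     "scores": [15, 18, 14, 16]},
]

# Rank rows by the average committee score, best first.
ranked = sorted(submissions, key=lambda row: mean(row["scores"]),
                reverse=True)
for row in ranked:
    print(f'{mean(row["scores"]):5.2f}  {row["topic"]}')
```

Sorting by average is only a starting point; as the list below shows, the real decisions drew on far more than the numbers.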
There were times we needed more detail to make a decision. In some cases we went to supporting websites that the submitter had included in their submission form. Sometimes it was a YouTube link, other times it was the page from the last conference at which they spoke.
Acceptance wasn’t just based on the scores we gave, but on a surprising number of implicit “specifications” that emerged in our judging, such as…
- What submissions got the best overall average scores from all reviewers?
- What submissions got the best score from just the 4 main reviewers?
- What did the other reviewers say?
- For which talk format is this submission most appropriate: keynote, track, or workshop?
- Do we have a good balance of new speakers and experienced speakers?
- Do we know the submitter? Have we seen them speak?
- What is the submitter’s reputation?
- How interesting or novel is the topic?
- If it’s a popular topic, is it a topic that’s good for our theme?
- What kind of topics brought in the crowds last year?
- What talks got good grades from the attendees last year?
- What kinds of grades did the submitter say they received for this topic when they presented it at other conferences?
- Did the submitter submit multiple topics? If so, were they just spamming us or did they have merit?
- Can the speaker handle a room of 100 people?
- From their writing or video, do we get a sense their story can be understood?
Post-selection “Gap” check
When we had the program fleshed out on the board, with Sticky Notes covering all of the available spots on the outline, it was time for another set of tests. Call it an end-of-sprint demo or an integration test suite: we looked at the whole program and considered the following to make sure we were not inadvertently biased:
- Are there topic tracks that other international conferences (e.g. STAREast/STARWest) usually have that we should have?
- What popular track topics have we not selected (for example: Internet of Things, Test Techniques)?
- What were the track topics last year?
- Do we have a good balance of male and female speakers?
- Do we have a good balance of experienced or new speakers?
- Do we have a good balance between the software testing community and schools?
- Was there any submission high on your individual list that did not get in?
- Do we have too many speakers from one company?
- Do we have a good balance of speakers from different countries?
- Is there a submitter who is normally very good, but we have not picked them for some reason?
- Is there someone who has not submitted at all, but they could be a worthy honored invitee?
- Have we inadvertently picked a multi-submitter to give two, or even three, talks without realizing it?
- What good speakers might we have inadvertently not chosen?
- Do we have a good set of topics for our participant demographics?
- Testing for five years
- Testing for 10 years
- “Certified and loving it”
- “Context-driven dude”
Top 3 Lessons
- It’s not the end of the world if people on a project interpret project goals in different ways. Ultimately, it’s rapid, frequent communication that prevents a misunderstanding so glaring that it puts the project at risk. Sometimes a misunderstanding provokes a conversation that can bring to light a serious omission.
- Productivity really flowed when there were no other distractions.
We all committed to getting the program built in those few days in Galway. Despite being remote from Sweden, Anna gave us her undivided attention, which was impressive, given that she had a four-month-old to take care of. The supporting conference members, like Siobhan, Lorraine, and Paul, made it easy for us to make progress because of their experience running this conference in years past and their warm hospitality, which made the work easier.
- Requirements don’t have to be defined in advance for the project to be successful. The list of 15 program criteria above wasn’t planned in advance. It was an emerging list of considerations and conversations I noticed we were having in Galway as we built the program. It came from everyone having a different perspective. The conversations weren’t scripted or rehearsed – it was just a group of smart people applying their wisdom and experience when thinking of what would make a great program.
But never mind all of this… the real test is now, from you. You will ultimately decide whether all of our deliberations (and testing) have made a great programme.