SpeakEasy at EuroSTAR 2016 – The Winning Proposal
In its second year working with the SpeakEasy initiative, EuroSTAR is delighted to announce that the conference committee has reached a decision on the best proposal. The winning author will have the opportunity to present at EuroSTAR 2016.
After a period of reading the short-listed proposals, debate, and review, the EuroSTAR Conference committee has selected Mirjana Kolarov as the winner. Mirjana was mentored by Maaret Pyhäjärvi.
SpeakEasy was founded by Anne-Marie Charrett and Fiona Charles in 2014 with the goal of promoting novice speakers at major tech conferences, with a particular focus on software testing conferences. The programme pairs potential speakers with a mentor, from whom they receive guidance and advice towards the goal of speaking at a major conference.
EuroSTAR and SpeakEasy joined forces again this year to make a place available to one such potential speaker.
The Successful Applicant
Mirjana Kolarov is a Test Department Manager at Levi9 IT Services. She is a passionate and highly motivated software tester with an M.Sc.C.E. degree. For more than 7 years, Mirjana has been practising software testing on a variety of projects. She loves getting her hands dirty with actual testing and leads by example, promoting appropriate and deliberate testing skills and techniques.
Mirjana is a co-founder of Test’RS Club, the first testing community in Serbia. She started the community out of a desire for a platform to learn, exchange ideas and experiences, and simply talk with fellow testers. Recently she has found a passion for speaking at international conferences and passing her experience on to others.
The Successful Proposal
Measuring Performance – Measure Twice and Cut Once
A main performance measurement on projects is response time, and when measuring it we usually come up with an average. This talk goes through my experience of why the average is the wrong number to pay attention to, and why you should pay attention to the maximum response time instead. While we manage noise in measurement by increasing the sample size, the maximum response time is not noise; it is the core of our measurement.
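The point about averages hiding the maximum can be illustrated with a toy example (hypothetical numbers, not data from the talk): in a long-tailed latency distribution, the mean smooths a severe outlier away, while the maximum records exactly what the slowest user experienced.

```python
# Toy latency sample in milliseconds (hypothetical, for illustration):
# nine fast requests and one that hit a long stall, e.g. a GC pause.
latencies_ms = [2, 3, 2, 4, 3, 2, 250, 3, 2, 4]

mean_ms = sum(latencies_ms) / len(latencies_ms)  # 27.5 ms
max_ms = max(latencies_ms)                       # 250 ms

# The mean is already misleading, and adding more fast samples would
# dilute it further -- while the maximum stays put at 250 ms.
print(f"mean: {mean_ms:.1f} ms")
print(f"max:  {max_ms} ms")
```

A tight SLA is a promise about every request, which is why the maximum (or a very high percentile) is the figure to watch, not the mean.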
In this talk, we look at lessons from a project with an SLA of a small number of milliseconds at the 99.99th percentile. I’ll show you how we learned that tools measure incorrectly and unreliably at high percentiles, how we pruned the data incorrectly (the coordinated omission problem), why measurement must be tied to specific functionalities since not all parts of the application matter equally, why response times shouldn’t be observed in isolation, and what the open source toolbox we came up with to manage these problems looked like.
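The coordinated omission problem mentioned above can be sketched in a few lines (a minimal illustration with invented numbers, not the talk's tooling): a closed-loop load generator that waits for each response before sending the next one silently drops the latencies of all the requests it *would* have issued during a stall, so high percentiles look far better than they were.

```python
def percentile(data, p):
    """Nearest-rank percentile on a sorted copy of the data."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

interval_ms = 1.0                      # intended time between requests
samples = [1.0] * 9_999 + [1000.0]    # 9,999 fast responses, one 1 s stall

# Naive view: the stall appears as a single slow outlier, so the 99th
# percentile still looks like a fast response.
naive_p99 = percentile(samples, 99)

# Corrected view: during the 1 s stall, ~999 more requests were due at
# 1 ms intervals; each would have waited progressively less long. Adding
# those missed samples back reveals how bad the stall really was.
corrected = list(samples)
t = 1000.0 - interval_ms
while t > 0:
    corrected.append(t)
    t -= interval_ms

corrected_p99 = percentile(corrected, 99)
print(f"naive p99:     {naive_p99:.0f} ms")   # hides the stall
print(f"corrected p99: {corrected_p99:.0f} ms")  # exposes it
```

Real tools such as HdrHistogram apply this kind of correction automatically; the sketch only shows why pruning, or never recording, the missed samples makes the measurement lie.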
We changed how we measure and what we do with the results. Join me in thinking critically about the question “Are my results real, or have they been lying to my face for years?” and learn how to do better performance testing.
– learn how to measure response times in a meaningful way with a practical toolset
– avoid the measurement mistakes we stumbled across for a high-reliability figure
– think critically about any measurements you’re collecting about your application