Software Testing @Spotify - Julia Oskö, Sofia Höglund
Testing @ Spotify ‣ Introduction ‣ Part 1 Organization Mobile development and release process Mobile application testing Good rules ‣ Part 2 Framework Scripted vs Model Based Testing Demo
Julia Oskö ‣ QA Chapter lead @ Spotify since 2014 ‣ From Oskarshamn ‣ Information Technology @ KTH, Stockholm ‣ Scania, Capgemini, Swedbank, etc.
What is Spotify? "Spotify brings you the right music for every moment" Fast facts: Started 2006 in Sweden. Now available in 58 markets. Over 15 million paying subscribers. Over 30 million available tracks. Over 60 million active users.
What we wanna do Deliver a fabulous and stable Spotify to all users, every time!
Being present everywhere
Organization
How we organize ‣ Founders Martin Lorentzon & Daniel Ek
How we organize Search Playlist Discover Core iOS Browse Player
How we organize Search Playlist Discover Core iOS Browse Player • Platform squads support the rest of the organization • ~15 squads deliver into the clients
How we organize Search Playlist Discover Core iOS Browse Player • Platform squad releases
Mobile development and release process
Deployment pipeline An automated software delivery process. Purpose: getting software from version control into the hands of the users. Every change: • Build software • Run a sequence of test stages • Release Pipeline stages: Commit stage -> Automated acceptance testing -> Manual testing -> Release
Continuous integration (CI) “ A software development practice where members of a team integrate their work frequently, usually each person integrates at least daily, leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible.” Martin Fowler
Continuous integration (CI) Implementing CI CI environment • Version control of code, • CI software: Team City with ~240 configuration and scripts agents • Scripts to build software • Build server used to build automatically software and execute tests on different platforms. • Tests are run on simulators and devices. • Report test results
Source Control Github Enterprise One repository per Spotify mobile client Shared C++ code across our platforms Example Spotify iOS client • 60 regular contributors • 80-120 merged pull requests per day
Continuous delivery (CD) In theory: every commit is built and delivered to the users. In practice: • Our master branch should always be in a shippable state • Employees and beta testers get new builds automatically • Pace of delivery lower than pace of change
Mobile release cycle • Release cycle: every 2 weeks; feature complete -> release branch; only fix blocking bugs on the release branch • Release candidate: no blocking bugs and all tests green; sent to our beta users • Release criteria: monitor all quality metrics before releasing; 50 000 streams on a release candidate • Release to users: App Store / Google Play Store
Quality metrics • Share of users without a crash • Startup time • View loading times • View loading errors • Playback latency • Playback errors Crash-free users over time
Squad priorities 1. Deal with live incidents 2. Deal with blocking bugs on release branch 3. Deal with blocking bugs on master branch 4. Normal feature work
Mobile application testing
“ What I need is a list of specific unknown problems that we’ll encounter ” Dilbert-like quote by Anonymous Manager
A group effort ‣ Squad owns quality ‣ “Why didn’t you test this?” vs. “How can we help you test this?” ‣ QA: Quality Assistance, not Quality Assurance
Who tests? ‣ Developers ‣ Test Automation (TA) ‣ Quality Assistance (QA) ‣ Employees Employee releases Crashlytics(iOS) / Google Play Store (Android) ‣ Users Beta programs Incremental roll-outs A/B tests
Different roles related to testing QA – Quality Assistance Puts testing first. Quality assistance and advising. Product experts. Be the user. Risk analysts. TA – Test Automator Focus on test automation. Ambassadors of test infrastructure. Implement and maintain test frameworks. QE – Quality Engineer Tool smiths. Develops developer productivity infrastructure. Focus on testability. Push test beyond conventional.
Manual Acceptance Testing What? ‣ User perspective: does the product make sense to the user? ‣ Experience: visual and interactive experience ‣ Cross-platform: hard to automate How? ‣ Exploratory testing: test design and execution at the same time; gain understanding of how the SUT works ‣ Test sessions: a period of time devoted to fulfilling specific test objectives ‣ Tours: metaphors of a real tourist touring
Challenges ‣ Complex device matrix Different devices, hardware, operating systems, screen sizes, localizations etc. ‣ Many user conditions to test on Memory, CPU, network conditions etc. ‣ Many test scenarios Same functionality in many different contexts and conditions. Example: Connect ‣ Testing slows down release cycle ‣ Crashes and issues reported post-launch Poor ratings and loss in users
How we deal with it ‣ We cannot test everything! ‣ Define a strategy that maximizes our test coverage Based on our user data Test on both simulators/emulators and real devices ‣ Automate what can be automated Prioritize test cases with high value and low cost ‣ Use our users to test Employees Beta users ‣ Know our app Monitoring and crash reporting tools Measure performance
The need for test automation: unit, integration, system, functional, acceptance and performance testing, model-based test automation, …
Why automate? ‣ Improve product quality Automated regression testing Increase test coverage ‣ Enable developer productivity ‣ Shorten feedback loops ‣ Free up manual testers More time for feature testing, user experience, UI interactions, look and feel, etc. Does the product actually make sense to the end user? ‣ Key enabler for continuous integration
Mobile Client Test Strategy ‣ Develop: unit, integration and manual testing ‣ Push: pre-commit tests, before merge, < 10 min ‣ CI: build verification tests, after merge, < 60 min ‣ Employees and beta testers ‣ Supervised: supervised tests, performance, stress and exploratory testing
Device vs. simulator • Devices are a pain to host and maintain • Functional testing in the Simulator • Performance and non-functional testing on Device Spotify - device: 1-0
Good rules
Continuous Integration Good rule #1: “Build & test software at every change” (Diagram: developers commit code; continuous integration runs a continuous build, runs tests and publishes reports; feedback goes back to the developer.)
Automation Culture Good rule #2: “Automate what can be automated”
Visualize test results Good rule #3: “Put the results of your tests where everyone can see them” (Diagram: continuous integration feeds test results, passed and failed, into a shared dashboard.)
Questions? ‣ Part 1, questions.
Testing @ Spotify ‣ Introduction ‣ Part 1 Organization Mobile development and release process Mobile application testing Good rules ‣ Part 2 Test Framework Scripted vs Model Based Testing Demo
Sofia Höglund ‣ Test Automation Developer @ Spotify since 2011 ‣ From Kalix ‣ Space Engineering @ LTU, Luleå & Kiruna ‣ Ericsson & Saab
Software Testing Pyramid (cost & run time increase toward the top): Manual Tests, GUI Tests, Integration Tests, Unit/Component Tests
Software Testing Ice-Cream Cone (cost & run time increase toward the top): Manual Tests, GUI Tests, Integration Tests, Unit Tests
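The base of the pyramid is cheap, fast unit tests of single components. As a minimal illustration (the `PlaybackQueue` class here is invented for the example, not Spotify code), a unit test exercises one small unit in isolation with no device, simulator or network involved:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical component under test: a tiny FIFO queue of track URIs.
class PlaybackQueue {
    private final Deque<String> tracks = new ArrayDeque<>();

    void enqueue(String trackUri) { tracks.addLast(trackUri); }

    // Returns the next track URI, or null when the queue is empty.
    String next() { return tracks.pollFirst(); }

    int size() { return tracks.size(); }
}

public class PlaybackQueueTest {
    public static void main(String[] args) {
        PlaybackQueue queue = new PlaybackQueue();
        queue.enqueue("spotify:track:a");
        queue.enqueue("spotify:track:b");
        // Tracks come back in FIFO order and the queue drains to empty.
        check("spotify:track:a".equals(queue.next()));
        check("spotify:track:b".equals(queue.next()));
        check(queue.next() == null && queue.size() == 0);
        System.out.println("all checks passed");
    }

    static void check(boolean ok) { if (!ok) throw new AssertionError(); }
}
```

Tests like this run in milliseconds, which is why the healthy shape keeps most tests at this level rather than in the GUI or manual layers.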
Mobile Client Test Strategy ‣ Develop: unit, integration and manual testing ‣ Push: pre-commit tests, before merge, < 10 min ‣ CI: build verification tests, after merge, < 60 min ‣ Employees and beta testers ‣ Supervised: supervised tests, performance, stress and exploratory testing
Flaky Tests What are “flaky tests”? • Unstable tests: timing issues, logical errors, hard to write good tests • Infrastructure: network, software updates, build framework, unreliable test data • Backend: latency, changes, regional differences • Flaky feature: work in progress, buggy Why is it bad? • Broken window syndrome • No trust in test results
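A common source of the timing issues above is a fixed sleep before an assertion: the test passes when the app happens to be fast and fails when it is slow. A sketch of the usual fix (the `waitUntil` helper is invented for illustration, not part of Spotify's framework) is to poll the condition until it holds or a deadline passes:

```java
import java.util.function.BooleanSupplier;

public class Waits {
    // Polls the condition every pollMillis until it is true or the
    // timeout elapses. Returns true as soon as the condition holds,
    // so a fast app doesn't pay for a worst-case fixed sleep.
    public static boolean waitUntil(BooleanSupplier condition,
                                    long timeoutMillis,
                                    long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) return false;
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~200 ms; the wait returns shortly
        // afterwards instead of sleeping for the full timeout.
        boolean ok = waitUntil(
                () -> System.currentTimeMillis() - start > 200, 2000, 20);
        System.out.println(ok);
    }
}
```

An assertion built on such a wait (`assertPlaying`-style checks, for example) tolerates normal timing variation without masking real failures, since it still fails after the timeout.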
Test Framework Architecture Test model Test reporting tools Model interface Model Non-model Implementation based tests Test Data Service UI Layer Client-side test API Java test API JSON data View Implementation Mobile app
Test Data Service [TDS] TDS provides test data, e.g. test users, artists, albums, etc. Example: @User(flags=Flag.PREMIUM, segment=Segment.LIVE, tags={Tag.FOLLOWERS,Tag.PLAYLISTS}) Test Case – “I need a live premium account that has playlists and followers” TDS – “I get 34 hits and return one of these. The user is locked and cannot be used by other tests.”
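One way a `@User` annotation like the one above could be declared and read is with a runtime annotation plus reflection; the test runner inspects the test method before running it and asks TDS for a matching, locked account. This is an assumed sketch mirroring the slide's names (`Flag`, `Tag`, `User`), not the actual TDS implementation:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.Arrays;

public class TdsSketch {
    enum Flag { PREMIUM, FREE }
    enum Tag { FOLLOWERS, PLAYLISTS }

    // Retained at runtime so the test framework can read it via reflection.
    @Retention(RetentionPolicy.RUNTIME)
    @interface User {
        Flag[] flags() default {};
        Tag[] tags() default {};
    }

    @User(flags = Flag.PREMIUM, tags = { Tag.FOLLOWERS, Tag.PLAYLISTS })
    public void startRadioTest() { /* test body would go here */ }

    // A runner would call something like this before the test and forward
    // the requirements to the Test Data Service.
    public static String describeRequirements(String methodName) {
        try {
            Method m = TdsSketch.class.getMethod(methodName);
            User user = m.getAnnotation(User.class);
            if (user == null) return "no user needed";
            return "flags=" + Arrays.toString(user.flags())
                 + " tags=" + Arrays.toString(user.tags());
        } catch (NoSuchMethodException e) {
            return "unknown method";
        }
    }

    public static void main(String[] args) {
        System.out.println(describeRequirements("startRadioTest"));
    }
}
```

The payoff of this style is that the test states *what kind* of account it needs declaratively, while the service handles finding, locking and releasing concrete test users.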
Test Result Service [TRS / Kibana]
Scripted Tests
● More deterministic
● Easy to get started with
● Test logic only requires a few lines of code
● Faster execution
● Shorter feedback loop

    @User(flags = Flag.PREMIUM, tags = Tag.PLAYLISTS)
    @Test(groups = {TestGroup.RADIO}, timeOut = Waiter.5MIN)
    public void startRadioTest() {
        BaseSystemTestAPI systemTest = new BaseSystemTestAPI() {
            @Override
            public void execute() {
                baseActions.openURL(ARTIST_URI);
                artist.assertLoaded();
                artist.startRadio();
                player.assertPlaying();

                baseActions.openURL(PLAYLIST_URI);
                playlist.assertLoaded();
                playlist.startRadio();
                player.assertPlaying();

                baseActions.openURL(ALBUM_URI);
                album.assertLoaded();
                album.startRadio();
                player.assertPlaying();
            }
        };
        executeSystemTest(systemTest, true);
    }
Model-Based Tests [MBT]
● Test design expressed in models
● Tests generated from models
● Keeps design & implementation separated
● Variation of finite-state diagrams
Models describe:
● Flows
● User stories
● Scenarios
● Features
‣ No BIG models!
Model-Based Tests [MBT]
GraphWalker generates test sequences with various strategies:
● AStar
● ShortestOptimized
● Random
● …
… and stop conditions:
● ReachedVertex
● EdgeCoverage
● TimeDuration
● …
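Conceptually, a random generator with an edge-coverage stop condition just walks the model's edges at random until every edge has been exercised. The toy sketch below illustrates that idea on a miniature version of the radio model; it is not GraphWalker's actual API, and the model here is a hand-picked subset of the slide's vertices and edges:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;

public class RandomWalk {
    // Model edges: source vertex -> list of (edge name, target vertex).
    static final Map<String, List<String[]>> MODEL = Map.of(
        "v_InitialView", List.of(
            new String[]{"e_GoToArtist", "v_Artist"},
            new String[]{"e_GoToPlaylist", "v_Playlist"}),
        "v_Artist", List.of(
            new String[]{"e_StartArtistRadio", "v_ArtistRadio"}),
        "v_Playlist", List.of(
            new String[]{"e_StartPlaylistRadio", "v_PlaylistRadio"}),
        "v_ArtistRadio", List.of(new String[]{"e_Pause", "v_InitialView"}),
        "v_PlaylistRadio", List.of(new String[]{"e_Pause", "v_InitialView"}));

    // Random walk from the start vertex until full edge coverage is reached
    // (the "stop condition"). Each step would drive the app via the
    // e_*/v_* methods of an interface like StartRadio.
    static List<String> walkUntilFullEdgeCoverage(long seed) {
        int totalEdges = MODEL.values().stream().mapToInt(List::size).sum();
        Random rnd = new Random(seed);
        Set<String> visited = new HashSet<>();
        List<String> path = new ArrayList<>();
        String vertex = "v_InitialView";
        while (visited.size() < totalEdges) {
            List<String[]> out = MODEL.get(vertex);
            String[] edge = out.get(rnd.nextInt(out.size()));
            // Identify edges by source + name: "e_Pause" appears twice.
            visited.add(vertex + ":" + edge[0]);
            path.add(edge[0]);
            vertex = edge[1];
        }
        return path;
    }

    public static void main(String[] args) {
        System.out.println(walkUntilFullEdgeCoverage(42));
    }
}
```

Different runs (seeds) yield different orderings, which is exactly what makes model-based runs good at stumbling onto transition sequences no one scripted by hand.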
Model-Based Tests [MBT]

    public interface StartRadio {
        public void e_Init();
        public void v_InitialView();
        public void e_GoToArtist();
        public void v_Artist();
        public void e_StartArtistRadio();
        public void v_ArtistRadio();
        public void e_GoToPlaylist();
        public void v_Playlist();
        public void e_StartPlaylistRadio();
        public void v_PlaylistRadio();
        public void e_GoToAlbum();
        public void v_Album();
        public void e_StartAlbumRadio();
        public void v_AlbumRadio();
        public void e_Pause();
        public void v_TrackNotPlaying();
    }
Model-Based Tests [MBT]
Model-Based vs Scripted
Generated model-based sequence:

    e_Init(); v_InitialView();
    e_GoToArtist(); v_Artist();
    e_StartArtistRadio(); v_ArtistRadio();
    e_Pause(); v_TrackNotPlaying(); v_InitialView();
    e_GoToAlbum(); v_Album();
    e_StartAlbumRadio(); v_AlbumRadio();
    e_Pause(); v_TrackNotPlaying(); v_InitialView();
    e_GoToArtist(); v_Artist();
    e_StartArtistRadio(); v_ArtistRadio();
    e_Pause(); v_TrackNotPlaying(); v_InitialView();
    e_GoToPlaylist(); v_Playlist();
    e_StartPlaylistRadio(); v_PlaylistRadio();
    …

Equivalent scripted test body:

    baseActions.openURL(ARTIST_URI);
    artist.assertLoaded();
    artist.startRadio();
    player.assertPlaying();
    baseActions.openURL(PLAYLIST_URI);
    playlist.assertLoaded();
    playlist.startRadio();
    player.assertPlaying();
    baseActions.openURL(ALBUM_URI);
    album.assertLoaded();
    album.startRadio();
    player.assertPlaying();
Model-Based vs Scripted
Scripted: easy to get started, deterministic, fast execution, one IDE
Model-Based: random, powerful…?
Mobile Client Test Strategy ‣ Develop: unit, integration and manual testing ‣ Push: pre-commit tests, before merge, < 10 min ‣ CI: build verification tests, after merge, < 60 min ‣ Employees and beta testers ‣ Supervised: supervised tests, performance, stress and exploratory testing
DEMO
Questions?
Thanks! Want to join the band? http://www.spotify.com/jobs