Making Smarter A/B Testing Decisions with Event Tracking and Session Replays

A/B tests are simple in theory, but they become challenging when insufficient data or events are tracked. In e-commerce, conversion rate is often highlighted as the key metric, but what drives it, and why? By adding tools that enable event tracking and session replays, such as FullStory, we can attach context and understand what users are actually doing, allowing us to make genuinely data-driven decisions.

Event Tracking: What Happened

When it comes to e-commerce, every click matters, which is why you need an event-tracking mechanism. Event tracking records user actions across your website, such as adding an item to the cart, hitting a call-to-action button, or proceeding to checkout. This data is essential for understanding how behaviour differs between your A/B test variants.

For example, if you’re testing two versions of a product page, event tracking helps you see:

  • Click Rate: Which of the two gets more clicks on the ‘Add to Cart’ button?
  • Engagement: How long do users spend on the page, and how many elements, such as images, product descriptions, or reviews, do they interact with?
  • Form Submissions: Which variant gets better uptake on optional forms, such as feedback or sign-up prompts?

This approach is beneficial because it allows you to understand what’s happening behind the scenes, beyond just looking at the final sales numbers.
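
For instance, instrumentation for the events above might look something like the sketch below. This is a minimal, hypothetical Swift example: the Analytics wrapper and event names are illustrative, and in practice the print call would be replaced by your tracking SDK's event API (FullStory and similar tools expose one).

    import Foundation

    // A/B-aware event tracking sketch. The Analytics type is hypothetical;
    // a real app would forward these calls to its tracking SDK.
    enum Variant: String {
        case control = "A"
        case treatment = "B"
    }

    struct Analytics {
        let variant: Variant

        // Record a named event, always attaching the active variant so the
        // results can be segmented per variant later.
        func track(_ event: String, properties: [String: String] = [:]) {
            var payload = properties
            payload["ab_variant"] = variant.rawValue
            print("track:", event, payload)  // stand-in for the SDK call
        }
    }

    let analytics = Analytics(variant: .treatment)
    analytics.track("add_to_cart", properties: ["sku": "KGN-123"])
    analytics.track("begin_checkout")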

Session Replays: Why It Happened

While event tracking shows what happened, session replays reveal the why. Watching a replay of a customer’s experience (with sensitive data masked) often uncovers behaviors and friction points you, as the developer, didn’t anticipate or encounter during testing. It’s an insight you simply can’t get from final sales numbers, and it’s invaluable when trying to identify behavioral patterns or usability issues.

For example, if event tracking shows a significant drop-off with Variant A, where users aren’t reaching the checkout page, session replays might reveal that the layout is confusing or that error messages aren’t clear enough.

Here are some of the key benefits:

  • See Drop-Off Points: Identify the exact actions that lead to user drop-offs. Are customers struggling to find important information? Are they hesitating after reading a specific detail or policy that prevents them from moving forward?
  • Spot UX Issues: Observe how users interact with different elements and locate potential friction points. Is adding items to the cart not as intuitive as you thought?
  • Analyse Navigation Patterns: Understand how users move through your site, and see if anything disrupts their journey. Are there too many steps to complete a purchase? How many clicks does it take before users finish their session? Does your marketing align with their needs?

Making Data-Driven Decisions

The key to making smarter decisions from A/B tests is combining event tracking data over a sufficient period with the visual insights gained from session replays. Some changes show clear trends in just a few days, but for more subtle tweaks, you may need weeks or even months to avoid making decisions based on daily fluctuations. Here’s the typical process:

  • Implement Variants: Roll out the new feature to 50% of users while keeping the original version as a baseline. It's best to test one factor at a time, but you can run multiple variants if you’re working with short development cycles, as long as you have a clear understanding of the baseline performance.
  • Gather Data: Track key metrics like clicks and form submissions to gauge how users interact with each variant. In e-commerce, the conversion rate is often the most important data point, but depending on your test, other interactions may be more insightful.
  • Watch Replays: If the data shows significant changes or unexpected differences, watch session replays to uncover the reasons behind user behavior. This helps you reach the root of issues that numbers alone can't explain.
  • Prioritise Changes: Use the data and replays to decide what improvements will have the biggest impact. It could be a simple design tweak, a clearer call-to-action, or even a complete layout overhaul. The key is to be data-driven, avoiding assumptions or personal biases.
  • Iterate Quickly: Roll out small, incremental changes, monitor the results, and keep iterating. A continuous testing and improvement cycle is essential for long-term success.

Wrapping It Up

When conducting an A/B test, it’s essential to combine event tracking with session replays. Event tracking shows what users are doing, while session replays reveal why they behave that way. Together they provide the knowledge needed to develop data-driven strategies, address user problems, and improve the quality of the product. As developers, this allows us to move quickly and, most importantly, to stay aligned with the business strategy.

Kogan.com Engineering Growth Paths: From Pricing Manager to Data Engineer

Committed to learning and continuous improvement, Kogan.com’s Engineering team develops its engineering talent through giving and taking responsibility, co-creation, mentorship, and internal mobility, giving engineers opportunities to grow and advance their careers. There are opportunities for engineers at Kogan.com regardless of background: some are Individual Contributors, some Tech Leads, some People Managers, and their growth paths and aspirations are supported throughout their journey. Featured here is Reuben Orange, the latest addition to the team, who joined us through Kogan.com’s internal mobility program. After a highly successful 10-year journey in the Purchasing team, we were excited to support Reuben's career aspirations and his passion for all things data and software engineering.

With an educational background in mathematics and extensive experience across various roles within Purchasing, Reuben brings a unique skill set to his new role.

Collaborating closely with our Data Engineering and Business Intelligence squad, Reuben plays a crucial role in developing, managing, and optimizing the infrastructure, tools, and processes needed to meet Kogan.com's analytics and data requirements.

Reuben's wealth of domain experience, coupled with his genuine passion for data and meticulous attention to detail, makes him an outstanding addition to the team as a Data Engineer. Tell us, Reuben…

What initially sparked your interest in transitioning from your previous role as pricing manager to data engineering? The pricing manager role was created to develop a “pricing strategy”; I called it “making sure we don't end up on an episode of Hoarders”. We needed to bring Kogan’s inventory level down from our very high post-COVID levels to a more reasonable position, while salvaging as much value as possible. Together, the whole team did that very successfully, and we now get much more value from a dollar invested in inventory than we did before. The role involved a lot of data querying, cleaning and crunching, which I enjoyed. And now, as a data engineer, I can do even more of that! While also learning from the amazing DEBI (Data Engineering and Business Intelligence) team.

What were the biggest challenges you faced during your transition, and how did you overcome them? Wrestling with new tools has been a challenge: pushing, pulling, sprinting. I was comfortable living in a spreadsheet, but now I live in the belly of a Python script. But I reckon the biggest challenge has been letting go of the old role. Old tasks die hard!

Can you share some specific examples of how skills from your previous role have been valuable in your new role? I was lucky that in my old role I gained experience writing SQL and building dashboards, with help from some great mentors. But a large part of the data engineering role is playing detective: you have to understand the data, and how it’s all connected, to get value from it. So it has helped to have experience as an end user of the data; for example, knowing how the different objects in the admin panel are connected, or what we mean by terms like AGPDI or Gross Sales.

How has your day-to-day work as a data engineer differed from your previous role as a pricing manager? I would say the biggest change is the Engineering way of working: agile. Every day we have a stand-up to chat about what we’re working on, what we’re planning to work on, and whether anything is blocking us. When completing a piece of work, we always get feedback on it from the rest of the team, so everybody knows a lot about what I’m working on and could easily pick up where I left off. We also have continuous improvement baked in, with fortnightly retrospectives where we look back at what did and didn’t work.

What advice would you give to others looking to make a similar career transition? I wouldn’t say it’s easy to get into software engineering (it's a lot of work), but I would say it’s very accessible, more so than ever. There is a rich vein of golden knowledge out there on the internet; you just have to mine it.

Deeper Understanding

A look into the potential impact of generative AI tools in the creative industry

Video Killed the Radio Star

The Buggles’ “Video Killed the Radio Star” captures some concerns about the rise of technology within the creative industry. Released back in 1979, the hit ironically went on to become MTV’s first music video. The lyric “rewritten by machine on new technology” still rings true today, and will keep ringing true as long as there’s room to innovate. Forty-five years later, we’re witnessing the dawn of a new way of manifesting an idea. It's naturally causing some fear, but what’s actually there to be scared of?

Computer God

Countless generative AI (GenAI) tools have become available to the public over the last few years. ChatGPT racked up over 100 million users just two months after its launch in November 2022; to put that into perspective, TikTok took nine months to reach the same number of active users, and Instagram took two years. OpenAI has released two other tools, Sora and DALL-E, that are admittedly just as impressive, and in partnership with GitHub it also boasts Copilot, which is favoured by a few people in the team. Midjourney and Google have also produced programs that make use of large language models (LLMs). We’ve undoubtedly entered an AI boom, and this phenomenon recently inspired an “AI arms race” in Silicon Valley, where tech giants shifted their strategies to invest in, improve, and integrate these tools into their existing software. On paper, these applications increase productivity by enabling the rapid production of ideas, but creative professionals can’t help feeling concerned about the future of their industry when any word can now be transformed into a solution in a matter of seconds.

Paranoid Android

In The Futur’s “The Future of AI in the Creative Industry”, Motion's Kevin Lau discusses the impact of these tools and the future of the creative industry as they continue to rise. Kevin observes that professionals within the industry are both amazed and terrified by generative AI, but he sees it as history repeating itself: humans have found ways to integrate change into their work time and time again. He uses the term “accelerant”; in content production, shortening the process of selling an idea before actual production starts could be beneficial.

On the topic of job displacement, which is arguably where most of the contention comes from, The Economist’s “How AI is transforming the creative industries” notes that technological disruption is often assumed to lead to job losses, but that this anxiety is often overblown. There is a confident assertion that AI is more likely to become a collaborator than a competitor. Marcus du Sautoy, a professor of mathematics at the University of Oxford, thinks that it’s going to change jobs, and that the potential termination of certain jobs will likely make way for the creation of new ones. He adds that AI could even push humans out of a mechanistic way of thinking and into becoming more creative than ever.

Kevin anticipates that AI is going to replace work that’s on the same level as stock photography. He also worries that younger designers might not be trained with the same fundamentals that older generations were trained on if the tasks, especially the more mundane ones, can be done with just the push of a button. Optimistically, however, he just sees the tools for what they are. Brainstorming, ideation and getting to know the market are still expected to play large parts in the design process. “The tool is just how it’s done,” he maintains. “Design is more fundamental; it’s more about solving the problem of why.”

Will Paterson echoes this sentiment in his video, “Is AI Killing the Graphic Design Industry?”, and believes that while the question of replacement is a tricky one to answer, what sets humans apart is their ability to think outside the box. He also brings up an interesting fact: a lot of designers were initially opposed to the introduction of computers into their process, until they learned to adopt them into their day-to-day work.

Deep Blue

A new wave of artists is emerging despite the panic over job security within the industry. Beatboxer and technologist Harry Yeff uses an AI system to generate percussive noises based on a dataset of his own vocalisations. He explains that, with the aim of producing an interaction between natural and synthetic notes, his “sonic lexicon” expanded, and he was able to create streams of something that is both him and what he calls his second self. Holly Herndon excitedly shares compositions she made with AI models, although she worries about the lack of intellectual property laws protecting this kind of art. And while well-established artists like The Beatles have willingly turned to GenAI to polish and complete a long-forgotten demo, there are countless instances of artists becoming victims of seemingly harmless projects or of deepfakes that can cause reputational damage across different forms of media.

There have been many discussions around the ethics of GenAI, and companies are starting to push for its responsible and honest use. Forbes’ “Ethical Considerations for Generative AI” outlines a few things we need to address before we can effectively benefit from these tools, such as bias and accountability. Around the time of ChatGPT's release, OpenAI made an effort to make it “less toxic”; their move to take a page out of Facebook’s playbook and hire Kenyan workers to bear the brunt of this work invites a conversation about what actually counts as ethical, but I digress. The State of Tennessee in the USA enacted the Ensuring Likeness, Voice and Image Security Act of 2024 (the ELVIS Act) in March 2024. Australia, unfortunately, has yet to see equivalent legislation.

Digital Witness

We’re still in the very early stages of this technology, which is expected to grow exponentially over the next few years, but it’s important to talk about and prioritise the protection of artists whose work can be compromised in a heartbeat. Once the dust settles, I know that we’ll find ways to take art to another dimension with these tools as our partners instead of our replacements; our collaborators instead of our competitors, as mentioned earlier. This, after all, is just an iteration of what’s been happening for centuries. We went from paintings to smartphone cameras; musical instruments to GarageBand plugins; and now, text-to-whatever generative AI tools. We can practically climb over the walls that surround creative expression at this point, and it’s only a matter of time before they come down entirely.

Sources:

How ChatGPT Managed to Grow Faster Than TikTok or Instagram

The AI Arms Race Is On. Start Worrying.

How AI is Transforming The Creative Industry

How AI is transforming the creative industries

Is AI Killing the Graphic Design Industry?

How AI is generating a revolution in entertainment


A new Beatles song is set for release after 45 years - with help from AI

Council Post: Ethical Considerations For Generative AI

The $2 per hour workers who made ChatGPT safer

AI Sound-alikes, Voice Generation and Deepfakes: the ELVIS Act

SwiftUI, a quicker way of doing things

You may have noticed iPhones don't exactly look the same as they used to. A lot has changed internally, and User Interface (UI) components look far different than they once did.

The code to create these views has evolved, and as a result, so have the UI components themselves. 

UIKit has been the framework for UI components in native iOS apps since 2008, and Swift, Apple's programming language for creating native iOS apps, has used it since the language arrived in 2014. Over time, UIKit has evolved into a robust and flexible framework.

In 2019, SwiftUI emerged as a new and faster framework to code with. However, at the time, developers were not quick to jump on board, as SwiftUI was simply not ready: there was little online discussion of it when it first came out, far less documentation, and it still needed time for more UI components to arrive. Five years later, this is no longer quite the case.

As Apple plans for SwiftUI to slowly mature and overtake UIKit as the primary framework for iOS apps, new native components will only continue to evolve. As a result, UI components on iOS devices and the iOS user experience will evolve too.

In this blog, we will explore the benefits of making the transition from UIKit to SwiftUI to code views in your project, while also keeping in mind the limitations and pain points you may run into.


Benefits of SwiftUI vs UIKit - A quick practical demo 

The benefits SwiftUI offers largely come from its easy-to-read, declarative syntax. Rather than explaining how it differs from UIKit, I think it might be easier to just show you.

Contacts List App Example

Below I have created two simple apps showing a contacts list, one in UIKit and one in SwiftUI. 

UIKit - Difficult to understand 

As you can see below (figure 3), writing in UIKit can be nuanced and difficult for the untrained eye to understand.

For this app in UIKit, I had to create both a Storyboard (a place where I can drag and drop objects to create a view) and a view controller to manage the data: two files and 63 lines in total.
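
To give a feel for it, here is a condensed, hypothetical sketch of the view-controller half (the original 63-line listing appeared as a screenshot in figure 3; the names and data here are illustrative, not the original code):

    import UIKit

    // A sectioned table view requires several data source overrides, plus a
    // Storyboard with a prototype cell registered under the "Cell" identifier.
    class ContactsViewController: UITableViewController {
        let sections = [("Contacts", ["Alice", "Bob"]), ("Fav Contacts", ["Carol"])]

        override func numberOfSections(in tableView: UITableView) -> Int {
            sections.count
        }

        override func tableView(_ tableView: UITableView, titleForHeaderInSection section: Int) -> String? {
            sections[section].0
        }

        override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
            sections[section].1.count
        }

        override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
            // Cells must be dequeued manually so they can be reused.
            let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
            cell.textLabel?.text = sections[indexPath.section].1[indexPath.row]
            return cell
        }
    }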

SwiftUI - Easy to understand and intuitive to write

SwiftUI in comparison is far easier to write and read, even to an untrained eye. It only required one file, containing 41 lines. I was able to show the code below to a friend without any coding knowledge, and they were able to understand how the app shown on the right was made through the code on the left. 

There is a List, separated into two sections, each containing the names of either contacts or favourite contacts.
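
Here is a sketch of what that SwiftUI version looks like (again, names and data are illustrative rather than the original 41-line listing):

    import SwiftUI

    // The whole screen: a List with two sections, declared in one file.
    struct ContactsView: View {
        let contacts = ["Alice", "Bob"]
        let favContacts = ["Carol"]

        var body: some View {
            List {
                Section(header: Text("Contacts")) {
                    ForEach(contacts, id: \.self) { Text($0) }
                }
                Section(header: Text("Fav Contacts")) {
                    ForEach(favContacts, id: \.self) { Text($0) }
                }
            }
        }
    }

    // This is what powers the Live Previews described below.
    struct ContactsView_Previews: PreviewProvider {
        static var previews: some View {
            ContactsView()
        }
    }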

Another benefit is SwiftUI Live Previews: you get a live, interactive preview of the view that updates in real time with any edits you make (the PreviewProvider at the bottom of the sketch above is what enables this). This beats UIKit, where you would have to run your program on a simulator every time to see your changes.

Summary of benefits 

  • Declarative syntax for views that is much easier to write and understand

  • Less code to write when creating views

  • An opportunity to remove older views and legacy code

  • Staying up to date with the latest UI components available

  • SwiftUI Live Previews


Challenges and considerations

Challenge 1: SwiftUI being the Newcomer 

SwiftUI hasn’t been around as long as UIKit, so there are still some components that may not be as robust or malleable. UIKit is comparatively more stable and has more UI components available than SwiftUI. There is also much more online support [1] and documentation for UIKit.

That said, SwiftUI has now been around for five years, which has been enough time for good documentation to accumulate and for the initial bugs to be dealt with. I believe it is a fair tradeoff: the majority of components are now available, and newer ones with more functionality keep arriving.

You can think of SwiftUI as an automatic car, and UIKit as a manual car. You have more control with UIKit, but it may take quite a bit more learning time to get the hang of.

SwiftUI, being the automatic car, handles a lot of the nuances we previously had to worry about ourselves. In the example above, we used to dequeue cells in table views manually (figure 3, line 51). Reusing cells reduces the number of new cell objects being created, which means less memory use and better scrolling performance on a huge table. With List in figure 4, we don’t have to worry about that at all: a great improvement in SwiftUI is that Lists reuse cells automatically, unlike tableViews in UIKit.

If there are some limitations with SwiftUI that only UIKit can solve, the good news is that we can integrate UIKit and SwiftUI views in our projects alongside each other. This leads us to our next challenge…

Challenge 2: Integrating SwiftUI Views in an existing project

Making this transition isn’t as easy as dumping your project and creating a new one. It is a slow and gradual change where there should be quite a bit of overlap. 

The key is to manage your architecture well: make sure the new code is neatly arranged in new folders or modules, while marking old code and files that you plan to remove. In our Kogan iOS project, we created several modules, which gave us better separation of concerns, improved testability, and made each module more reliable and robust.

SwiftUI is simply a framework for creating views; much of the remaining code, such as the coordinators that handle navigation or the viewModels that feed data into views, can still be reused from your previous codebase.

Navigation in a SwiftUI app is a bit different from a UIKit app. In SwiftUI you simply wrap your views in a NavigationView, and can use NavigationLinks to take you to another view (reference [5] explains this wonderfully). However, for those of you who are transitioning from UIKit, this shouldn't concern you. 

In UIKit you have to create navigationControllers to manage which viewControllers are shown and navigated between. If you are transitioning, you can keep using navigationControllers, as you do in UIKit, for your SwiftUI views. You simply need to wrap your SwiftUI view in a UIHostingController (see figure 6).

Another myth I’ve seen online is that it is difficult to have SwiftUI views in a predominantly UIKit-led project. This is not true at all; it’s super easy to do, and you can even have fail-safes to fall back to showing your old UIKit views. Take a look below!

Navigating from a UIKit view to a SwiftUI view was as easy as the code below.
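
It went roughly like the following sketch (the original code appeared as a screenshot; this version reuses the ContactsView from the earlier example):

    import SwiftUI
    import UIKit

    // Wrap the SwiftUI view in a UIHostingController, then push it onto the
    // existing UIKit navigation stack like any other view controller.
    func showContacts(from viewController: UIViewController) {
        let hostingController = UIHostingController(rootView: ContactsView())
        viewController.navigationController?.pushViewController(hostingController, animated: true)
    }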

At Kogan, we were very cautious during our transition. One way we did this was by utilising feature flags: when a flag is turned on, the new SwiftUI view displays, and when it's turned off, the old UIKit view displays. This can be done in conjunction with Firebase Remote Config, so that you can instantly switch which view users see instead of having to make a patch release if something goes wrong.
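
A minimal sketch of that pattern might look like this (the flag name is illustrative, and it assumes FirebaseApp.configure() has already run at startup):

    import FirebaseRemoteConfig
    import SwiftUI
    import UIKit

    // Choose between the new SwiftUI screen and the old UIKit one at runtime,
    // based on a Remote Config boolean that can be flipped without a release.
    func makeContactsScreen() -> UIViewController {
        let useSwiftUI = RemoteConfig.remoteConfig()
            .configValue(forKey: "use_swiftui_contacts").boolValue

        if useSwiftUI {
            return UIHostingController(rootView: ContactsView())
        }
        // Fail-safe: fall back to the existing UIKit view controller.
        return ContactsViewController()
    }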


Conclusion 

SwiftUI is ready for any team pondering the jump. It’s a great opportunity to clean up your project: you will gradually remove legacy code while introducing the latest UI to your app. Once you have cleared the initial hurdle of replacing your views with SwiftUI, development will be much faster, and the code will be much easier to understand!

The key technical issue you will face when going from UIKit to SwiftUI is working with a framework that hasn’t had the same time to mature as UIKit. I believe five years has been enough: SwiftUI offers faster coding thanks to how easy it is to declare views, while automatically handling previous pain points such as dequeuing cells in a table view.

The other key technical issue is running a hybrid application while you transition. This can be slow, but it is possible to manage and work around, as we have done here at Kogan, and you will only get faster.


From Database to Domain: Elevating Software Development with DDD

Introduction

In the complex landscape of software development, aligning design methodologies with business needs is crucial. Domain-Driven Design (DDD) emerges as a key approach in addressing this alignment, especially in projects characterized by intricate business rules and processes. This methodology stands in contrast to traditional practices, such as embedding business logic within databases, offering a more adaptable and business-focused perspective.

Section 1: Understanding Domain-Driven Design

Definition and Focus

DDD is centered around developing software that intricately reflects the business models it aims to serve. It emphasizes a deep understanding of the business domain, ensuring that the software development process is driven by this knowledge, thereby facilitating a common language between developers and business stakeholders.

History and Evolution

Pioneered by Eric Evans, DDD has grown from a set of principles into a comprehensive approach, widely recognized for its ability to tackle complex business challenges through software.

Aligning Design with Business Needs

The essence of DDD lies in its focus on business-relevant software development, a principle that aligns closely with the need for software to be adaptable and directly linked to business objectives.

Section 2: Core Concepts of Domain-Driven Design

In DDD, concepts like Entities, Value Objects, Aggregates, Domain Events, Repositories, and Bounded Contexts form the foundation of a robust domain model.

Entities and Value Objects: Entities are defined by their identity, playing a crucial role in maintaining business continuity, while Value Objects add depth and integrity to the domain model.
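
To make the distinction concrete, here is a minimal sketch in Swift, assuming a simple e-commerce domain (the types are illustrative):

    import Foundation

    // A value object: defined entirely by its attributes, with no identity.
    struct Money: Equatable {
        let amount: Decimal
        let currency: String

        func adding(_ other: Money) -> Money {
            precondition(currency == other.currency, "Currency mismatch")
            return Money(amount: amount + other.amount, currency: currency)
        }
    }

    // An entity: defined by its identity, which persists as its state changes.
    final class Order {
        let id: UUID
        private(set) var total: Money

        init(id: UUID, total: Money) {
            self.id = id
            self.total = total
        }

        // Business rules live on the domain model, not in the database.
        func add(lineItemPrice: Money) {
            total = total.adding(lineItemPrice)
        }
    }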

Domain Model vs Database-Level Logic

The decision to embed business logic in the domain model rather than in the database is pivotal. Traditional database-centric approaches can lead to scalability challenges and obscure the business logic from the development team. A domain-centric approach, as proposed by DDD, enhances clarity, flexibility, and testability, a significant shift from database-heavy methodologies.

Impact of Database Logic vs. Domain Model Flexibility

A critical aspect to consider in software architecture is how changes in business logic affect different layers of the application, particularly the database and the user interface (UI). Traditional approaches that embed business logic in the database often lead to a rigid structure where changes in the database logic can cascade up, impacting the UI layer significantly. This rigidity can result in a cumbersome and time-consuming process for implementing changes, especially when business requirements evolve frequently.

In contrast, Domain-Driven Design offers a more flexible approach. By encapsulating business logic within the domain model rather than the database, DDD allows for independent management of how data is formatted and handled at different levels of the application. This separation of concerns means that:

  • Changes at the Database Level: Alterations in the database schema or logic can be absorbed by the domain model without necessarily impacting the UI. The domain model acts as a buffer, allowing for adaptations in the data representation without requiring changes in the user interface.

  • UI Flexibility: The UI can evolve independently of the database structure. The domain model can format and present data in ways that are most suitable for user interaction, irrespective of how that data is stored or processed in the backend.

  • Bi-directional Adaptability: The domain model offers flexibility in both directions – it can adapt to changes in the database while also accommodating different requirements or formats needed by the UI. This adaptability is key in modern applications where user experience is paramount and business requirements are ever-changing.

By adopting a domain-centric approach as advocated by DDD, applications become more resilient to change and more aligned with agile development practices. This flexibility is a significant advantage in today’s fast-paced and user-centric software development environment.
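
One way to picture this buffering role is a repository that translates between storage rows and domain objects, so schema details stop at the boundary. The sketch below reuses the Money and Order types from the earlier example; all names are illustrative:

    import Foundation

    // The shape of the database record: storage concerns (cents, raw strings)
    // stay here and never leak into the domain model or the UI.
    struct OrderRow {
        let id: String
        let totalCents: Int
        let currency: String
    }

    protocol OrderRepository {
        func find(id: UUID) -> Order?
    }

    struct SQLOrderRepository: OrderRepository {
        let fetchRow: (UUID) -> OrderRow?  // stands in for a real database call

        func find(id: UUID) -> Order? {
            guard let row = fetchRow(id), let uuid = UUID(uuidString: row.id) else {
                return nil
            }
            // Map the storage representation (cents) to the domain's Money value
            // object; a schema change only requires touching this mapping.
            let total = Money(amount: Decimal(row.totalCents) / 100, currency: row.currency)
            return Order(id: uuid, total: total)
        }
    }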

As we have seen, the core concepts of DDD – Entities, Value Objects, Aggregates, Domain Events, Repositories, and Bounded Contexts – are essential in crafting a domain model that is both robust and flexible. This domain-centric approach, emphasizing clarity and adaptability, marks a significant shift from traditional database-heavy methodologies. The impact of this shift is profound, not only in the architecture of the software but also in the way it aligns with and supports business objectives. Now, let’s explore how these theoretical concepts translate into real-world benefits, further underlining the value of DDD in modern software development.

Section 3: Benefits of Implementing DDD

The implementation of Domain-Driven Design goes beyond just shaping a technical framework; it brings several key advantages that enhance both the development process and the final software product. These benefits, stemming directly from the principles and concepts discussed earlier, include enhanced communication across teams, improved software quality, and greater scalability and maintainability. In this section, we will delve into each of these benefits in more detail, showcasing how the foundational principles of DDD contribute to effective and efficient software development.

  • Enhanced Communication: The adoption of a ubiquitous language and a shared understanding of the domain model bridges the gap between technical teams and business stakeholders. This common ground improves collaboration and ensures that business requirements are accurately translated into technical solutions. In practice, this leads to fewer misunderstandings, more efficient development cycles, and solutions that better meet business needs.

  • Improved Software Quality: By focusing deeply on the domain, developers create solutions that are more aligned with the actual business problems they are meant to solve. This alignment results in higher-quality software that is not only functional but also robust and resilient to changing business requirements. Furthermore, the modular nature of DDD allows for more targeted testing and quality assurance, leading to more reliable and maintainable code.

  • Scalability and Maintainability: DDD's emphasis on a well-structured domain model facilitates the creation of software architectures that are easier to scale and maintain over time. This is especially beneficial in complex systems where changes are frequent and scalability is a concern. The clear separation of concerns and bounded contexts within DDD makes it easier to isolate and address specific areas of the system without impacting the whole, thereby enhancing maintainability.

Section 4: Real-World Applications of DDD

DDD has been successfully applied across various industries, from finance to e-commerce, demonstrating its versatility in aligning software solutions with complex business needs.

  • Rebuilding guardian.co.uk with DDD
  • The Custom House development team’s application of DDD and value objects

Conclusion

Domain-Driven Design represents a sophisticated methodology for aligning software development with business complexities. It offers a stark contrast to traditional database-centric approaches, advocating for a more agile and business-focused development process. DDD not only addresses technical requirements but also closely aligns with business objectives, making it an essential approach in modern software development.

Are you intrigued by the possibilities of Domain-Driven Design? If you're looking to delve deeper into DDD or considering a shift from traditional database-centric models to a more domain-focused approach, we invite you to join the conversation. Embrace the complexities of modern software development by exploring DDD, a methodology that can unlock unprecedented levels of adaptability and alignment with business goals in your projects. Dive into the world of DDD to discover how it can transform your approach to software development and help you navigate the intricate landscape of business needs and technical challenges. Stay informed, stay ahead, and let DDD guide your journey in the evolving world of software innovation.


Making the case for the WebView

Learning to embrace a hybrid approach for mobile app development:

Native apps are best!

Like the rest of the native mobile app development community, I typically agree with the notion that “native is best” when it comes to mobile apps. After all, these are the technologies we spend tens of hours every week utilising, and there is a passion for user experience that I feel is required in order to happily dive into the deep-end specialisation of Google or Apple’s tooling.

However...

As one of these oft-opinionated app developers who tends to view non-native tooling like React Native as delivering a sub-par user experience, I have a potentially unpopular idea to share.

Sometimes, a nested WebView does have its place.

The pitch

It might not be popular with the purists, but please hear me out. The nested WebView does sometimes have an important role to play (emphasis on nested!).

When facing the challenge of balancing new features, requested enhancements, required platform changes and general maintenance, maintaining the required degree of parity between your mobile app and its accompanying website can be difficult.

In my experience, here are some key factors to consider when deciding whether to implement a flow natively within your app or via a nested WebView:

  • Is it a critical piece of functionality that users will frequently use?
  • To what degree is the content static or interactive?
  • Will it be subject to frequent changes?
  • How much content is there to display?
  • Will the overhead of loading a WebView (including any relevant JavaScript) be fast enough for the use case?
  • Do you feel the technical hurdles of developing it natively will pay off in a better user experience?

There is no definitive flow chart that can help make this decision for you. However, when balancing time and priorities, it can make sense to utilise the hard work of your fellow web engineers. This frees up your mobile team to focus on nailing the quality of the key parts of your app that users interact with the most.

Hiding the seams

A key part of making the experience great for your users is to make the transition from native flows to WebViews as seamless as possible.

Some things to consider (a sketch of a few of these follows the list):

  • Nest any web views to keep the user inside your application, keeping the framing and navigation components of your app’s native experience.
  • Where there is navigation, bring the user back to your native experience wherever possible.
  • Maintain the user’s state, such as login details, e.g. by utilising HTTP cookies.
  • Limit the user’s ability to navigate away from the intended page (e.g. by hiding the web page’s headers/footers).
  • Persist light/dark mode theming if possible.
  • Look to combine native views and WebViews within the one screen where appropriate; you don’t necessarily need to make the whole screen a WebView to take advantage of what they offer!
  • Leverage a tool like Firebase Remote Config to supply required configuration to your app, so you can update things on the fly if required.
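
Here is an iOS-flavoured sketch of a few of these ideas using WKWebView (the screenshots below are from our Android app, but the approach is the same on both platforms; the URL, cookie values and CSS selectors are illustrative):

    import UIKit
    import WebKit

    final class CheckoutWebViewController: UIViewController, WKNavigationDelegate {
        private let webView = WKWebView()

        override func loadView() {
            view = webView  // the WebView fills the screen inside native framing
        }

        override func viewDidLoad() {
            super.viewDidLoad()
            webView.navigationDelegate = self

            // Maintain the user's state by injecting the existing session cookie.
            if let cookie = HTTPCookie(properties: [
                .domain: "www.example.com",
                .path: "/",
                .name: "sessionid",
                .value: "abc123",
                .secure: "TRUE",
            ]) {
                webView.configuration.websiteDataStore.httpCookieStore.setCookie(cookie) {
                    self.webView.load(URLRequest(url: URL(string: "https://www.example.com/checkout")!))
                }
            }
        }

        // Hide the page's own header/footer so the user keeps the app's native
        // navigation and can't wander off the intended flow.
        func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!) {
            webView.evaluateJavaScript(
                "document.querySelectorAll('header, footer').forEach(e => e.style.display = 'none')"
            )
        }
    }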

An example of an embedded WebView within the Kogan.com Android application. The user can navigate beyond this page but only where intended.

An example of bringing the user back to your native experience from within a WebView.

We render the product description inside a WebView, embedded within a native XML Android layout. This means we get to utilise native tooling where we can, and outsource frequently-updated content with complex formatting.


It’s all about balance

Like a lot of things in life, it’s all about balance! What makes mobile apps special are the small details of native layouts, the gestures, the animations, but those take time, and time is precious in a development world of many priorities to juggle.

I encourage anyone who works in the space to consider where and how you can leverage the hard work of your web friends to get relevant functionality into the hands of users with a little less stress, giving your team more time to focus on the most important parts of your app 😀

Decreasing CI build times by up to 50% by caching derived data using GitHub Actions

We had a problem: our CI pipeline was increasingly becoming a bottleneck in our iOS development. We at Kogan like to develop at a fast pace, but we were constantly being held up waiting for builds to complete, leading to a lot of frustration within the team. The rest of the engineering team had switched to GitHub Actions (GHA), and with us still using CircleCI, it was time for us to make the change. This was the perfect time to re-evaluate how our pipeline worked and ensure it was as efficient as it could be. With build times of over 30 minutes, there was a lot of room for improvement.

As we were making the switch, we brainstormed ways to improve the overall efficiency of our pipeline, and we kept returning to derived data. This is how Apple handles build caching within Xcode, but could we use it within our CI pipeline? A bit of investigation showed we weren’t the first to have this thought, and we used this blog post (https://michalzaborowski.medium.com/circleci-60-faster-builds-use-xcode-deriveddata-for-caching-96fb9a58930) as a base for our improvement.

So, where to begin? We started by replicating our current CI pipeline in GHA, which was pretty smooth other than a few challenges accessing some of our private repositories. Our build times slightly improved with this switch, but builds still regularly took more than 30 minutes to complete. We were already caching the Swift packages we use in the project, but there was still plenty of room for improvement.

First, we need to ensure that the repository's full history has been fetched, which can be done simply by using the option fetch-depth: 0 on the checkout action in our existing initial step.

    - uses: actions/checkout@v4
      with:
        token: ${{ secrets.GITHUB_TOKEN }}
        fetch-depth: 0

We then need to cache the derived data. We need to do this in two parts - caching the derived data when a pull request has been successfully merged, and then also restoring the latest derived data cache to the CI pipeline whenever a pull request is opened.

In order to identify the latest develop commit, we use a GHA marketplace action that finds the nearest develop commit and exposes its SHA as a step output (note the id on the step, which later steps use to reference that output).

    - name: Create variable for the nearest develop commit SHA
      id: setSHAs  # referenced below as steps.setSHAs.outputs.head
      uses: nrwl/nx-set-shas@v3
      with:
        main-branch-name: 'develop'

Then, we need to create a separate pipeline, which will be used to save the derived data whenever a pull request is successfully merged to develop. This will be a very similar flow to our original; the difference is that we save the cache at the end, like below. This will cache the tmp/derived-data folder (which we have set as the location of derived data in fastlane) against the latest develop commit SHA.

    - uses: actions/cache/save@v3
      name: Save Derived Data Cache
      with:
        path: tmp/derived-data
        key: v1-derived-data-cache-${{ steps.setSHAs.outputs.head }}

Next we need to get the correct cached derived data in our CI pipeline for pull requests. We need to again use the latest develop commit SHA to find the correct derived data cache. We use the restore version of the same action used above in order to find the right cache. This will either find a cache with an exact match, or it will fall back and use the most recent derived data with a partial match.

    - uses: actions/cache/restore@v3
      name: Restore Derived Data Cache
      with:
        path: tmp/derived-data
        key: v1-derived-data-cache-${{ steps.setSHAs.outputs.head }}
        restore-keys: |
          v1-derived-data-cache-

As the mentioned blog post also notes, GHA will set files' last-modified times to the time the repository was cloned. Since Xcode uses these times, we need to restore them in order to take advantage of the derived data caching. We managed to find a GHA marketplace action which allowed us to do this.

    - name: Update mtime for incremental builds
      uses: chetan/git-restore-mtime-action@v2

Last but not least, we need to set IgnoreFileSystemDeviceInodeChanges to YES to ensure Xcode does not consider our cached derived data out of date.

    - name: Set IgnoreFileSystemDeviceInodeChanges flag
      run: defaults write com.apple.dt.XCBuild IgnoreFileSystemDeviceInodeChanges -bool YES

With all of that complete, we have successfully sped up our CI pipelines and decreased our build times by up to 50%. With CircleCI we were regularly exceeding 30 minutes; after switching to GHA and caching derived data, our builds are down to roughly 15 minutes. This is a massive improvement and has definitely made us developers much happier!

Looking forward, we want to keep improving our pipeline and are always looking for ways to keep it up to date and as fast as it can be. One problem we have encountered with this caching method is that there is no way to fully clear the cache and force a build to run without the cached data (in case of build problems) other than manually deleting each cache entry individually. This can be time-consuming, so we would like to investigate it further and find ways to mitigate it.

Project Spotlight: Optimizing ChannelAdvisor Integration: Real-Time Product Catalog Synchronization

Introduction:

In today's e-commerce landscape, seamless integration with third-party platforms is essential for expanding reach and boosting sales. This technical blog post delves into Kogan.com's ChannelAdvisor integration project, offering insights to software engineers on event-driven architectures, infrastructure automation, and efficient CI/CD practices.

Understanding the Challenge:

Integrating with ChannelAdvisor posed a significant hurdle. As a renowned e-commerce platform bridging various marketplaces like eBay and Amazon, ChannelAdvisor promised wider customer reach for Kogan.com. However, maintaining real-time accuracy for stock and pricing data was crucial to prevent out-of-stock purchases and maintain a positive customer experience. The challenge was to ensure timely updates without burdening our production database.

Leveraging BigQuery and Event-Driven Architecture:

To address the synchronization challenges, we harnessed Google BigQuery for efficient management of product information. While BigQuery's response times were slightly slower, its capacity to handle extensive catalogs proved fitting. Recognizing that BigQuery's data wasn't frequently updated, we embraced an event-driven architecture through Amazon's EventBridge: stock and pricing update events were generated whenever values changed, and Lambda functions processed them to update ChannelAdvisor. This solution guaranteed accurate and timely stock and pricing information, enhancing the customer experience.
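
In shape, the flow looked something like the sketch below. This is purely illustrative Swift rather than our production code: the event fields and the updater protocol are hypothetical, and the real handler ran as an AWS Lambda function.

    import Foundation

    // The kind of payload an EventBridge rule might deliver on a stock or
    // pricing change (field names are hypothetical).
    struct PriceStockEvent: Codable {
        let sku: String
        let price: Decimal
        let quantityAvailable: Int
    }

    protocol CatalogUpdater {
        func push(_ event: PriceStockEvent) throws
    }

    // The Lambda-style handler: decode the incoming event and forward the
    // update to ChannelAdvisor via the updater.
    func handle(_ payload: Data, updater: CatalogUpdater) throws {
        let event = try JSONDecoder().decode(PriceStockEvent.self, from: payload)
        try updater.push(event)
    }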

Exploring New Technologies: Terraform and CI/CD Automation:

Within the ChannelAdvisor integration, we ventured beyond our usual tech stack to explore Terraform, an infrastructure provisioning tool. Incorporating Terraform into our UAT and Production environments allowed us to define infrastructure as code, ensuring consistent and reproducible deployments. Integration with GitHub Actions streamlined deployment by triggering Terraform through automated testing and linting checks. Furthermore, Terraform was configured to establish a dedicated test environment for each pull request, facilitating UAT and accelerating iteration cycles. This automation led to faster, reliable deployments and efficient development.

Final Outcome:

The ChannelAdvisor integration unveiled the complexities of synchronizing a vast product catalog in real time. Through Google BigQuery, event-driven architecture via Amazon's EventBridge and Lambda functions, we maintained accuracy in stock and pricing updates. Our exploration of Terraform and CI/CD practices introduced efficiency in infrastructure automation and deployment. Ultimately, this project demonstrates how innovative solutions and streamlined practices optimize e-commerce operations, elevating the customer experience.