It doesn't matter whether you have "big data" or "small data": if you need to import and process it in near real time, you want a system that is robust and maintainable. This is where the fault tolerance and scalability of Erlang/OTP, the expressiveness of Elixir, and the flexibility of Flow and GenStage are all great assets. This is the story of how we built a data pipeline to move and process billions of rows from MySQL and CSV files into Redshift, and what we learned along the way.