How InfluxDB Enables IoT Sensor Monitoring of Aquariums


This blog post was originally published on the InfluxData blog on September 8, 2020 and can be found here. Interested in learning more? Reach out to us today to get the conversation started.

I recently spoke with Jeremy White, who is using InfluxDB to monitor his aquariums. By collecting IoT sensor data, he has been able to better understand his 200-gallon saltwater aquarium full of fish and coral. The entire project can be found on GitHub.

Caitlin: Tell us about yourself and your career.

Jeremy: I’m a Senior Network Automation Consultant at Network to Code, and my background is in network engineering. Network to Code is an industry leader in network automation. I taught myself Python and Ansible, and I have built a full network automation framework. In addition to Python and Ansible, I’m familiar with the Django REST framework, Flask and NetBox. I’m starting to dive into telemetry and analytics.

Caitlin: How did you learn about InfluxDB?

Jeremy: I have previously used InfluxDB at work. Some colleagues have used InfluxDB and Telegraf previously to monitor public DNS. They have also used InfluxDB, the purpose-built time series database, to monitor their home networks. Network to Code has implemented InfluxDB for various clients based on their needs. I was impressed with InfluxDB and thought it might be a great way to improve the monitoring of my saltwater aquarium.

Caitlin: Tell us about your aquariums.

Jeremy: I currently have two saltwater aquariums. One is 54 gallons, and my new one is 200 gallons. I recently decided to upgrade to the 200-gallon aquarium. It’s definitely going to be a process moving all of my fish and coral to their new home. I need to ensure everything is stable and that it’s working the way I want it to.

I’m growing small polyp stony (SPS) coral. I have Acropora, Montipora, Catalaphyllia, Lithophyllon, Discosoma, Zoanthus and Briareum corals. Many of these corals are found closer to the surface within coral reefs. They have a calcium-based skeleton with small polyps. These corals can be very vibrant and beautiful. I have a Staghorn coral which is a beautiful highlighter teal-bluish color.

corals_00

Close-up of coral growing in aquarium

Most of my coral is either aquaculture or mariculture. Aquaculture coral is grown in an aquarium or tank with artificial lighting. Mariculture coral is coral cultured in specifically designated farmed areas in the ocean. This means they are not pulling coral from native reefs. I’ve tried to make sure I’m as sustainable as possible. Most of the coral I have originated from Indonesia or Australia. I do have some coral fragments that are from someone who grew them in captivity for over 20 years.

Right now, I have a Coral Beauty Dwarf Angelfish, a Blue Hippo Tang, a Yellow Tang, a Yellow Watchman Goby, three Gladiator Clownfish, an Ocellaris Clownfish, a Copperband Butterflyfish and a Pixie Hawkfish. I also have a bunch of invertebrates including about 20 hermit crabs, roughly 20 snails, a Banded Serpent starfish, a Sand Sifting starfish, an Arrow crab and an Emerald crab.

corals_01

200-gallon saltwater aquarium monitored by InfluxDB

Caitlin: What were some of the challenges you were facing with your aquariums?

Jeremy: I knew I had proper lighting and proper water flow. However, I knew my corals weren’t growing at the rate that I thought I should be observing. There was minimal calcification on the SPS coral. They were alive and surviving, but they weren’t thriving. In addition to lack of growth, the coloration was off. Proper lighting provides the necessary energy for photosynthetic organisms like plants, animals, anemones and coral to survive. Lighting can also impact fish behavior and physiology.

The next aspect is water chemistry: I knew realistically that I was probably only going to check the status of the aquarium once a week. Having a bunch of individual tests that I’d have to run manually wasn’t going to happen as often as I’d like. I knew I needed to automate my monitoring solution to ensure I had the most recent accurate data about my aquarium.

It turns out my aquarium environment wasn’t as stable as I thought. Coral can survive in a wide range of water temperatures, and the pH of the water can vary. It’s more important that the levels stay consistent. Coral are very flexible creatures, so they are able to adapt and survive. Frequent fluctuations in their environment, like temperatures and salinity, can be very stressful for coral, and detrimental to their survival and growth.

Within three days of setting everything (my AquaPy controller) up, I started seeing results. I realized there is a two-degree temperature swing from day to night. As I work from home, I know that it isn’t because my house is getting too hot, as the temperature in my home is pretty consistent. A two-degree swing is pretty minimal, but it’s enough to impact the growth and color of my coral. After iterating with my setup (AquaPy), I got the temperature delta down to less than one degree. My tanks hover around the 79–81°F mark. I want to minimize the difference as much as possible.

Caitlin: Tell us about the IoT monitoring solution you built using InfluxDB.

Jeremy: My whole stack is built using containers. I love containers! Whether it’s a work or personal project, if I can containerize it, I do. If there isn’t a prebuilt container, I’ll create one myself.

I started off by purchasing sensors from Atlas Scientific. They make the IoT sensors and the small printed circuit boards (PCBs). The PCBs are used to read the data from the sensors over the I2C protocol on a Raspberry Pi. There’s a company called Whitebox which makes a product called Tentacle T3 for Raspberry Pi, which helps make the whole setup more plug-and-play.

I use Django to configure the sensors. Along with the Django admin portal, I’m using a django_rq worker, a Redis worker, to listen for jobs as they come in. I’m using a Django Redis scheduler which runs crons and schedules known jobs at their cron intervals. Right now, it’s scheduled for every minute; every 60 seconds is the lowest interval you can set with the RQ scheduler. The RQ scheduler puts the job into Redis. Next, the RQ worker, which is actively listening to the Redis queue, picks up the job. The RQ worker communicates with a Postgres database to pull the details needed about the job to allow it to execute and collect the sensor data.

I have sensors pulling data on water temperature, water salinity, water level and pH. My ideal pH level is 8.3. There is a bit of range due to carbon dioxide levels in the air and CO2 created by the fish. On any given day, my tank pH ranges between 7.95 and 8.19.
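The scheduler-to-queue-to-worker pipeline Jeremy describes can be sketched with standard-library stand-ins. In the real stack the scheduler is an RQ scheduler on a one-minute cron, the queue lives in Redis, and an RQ worker executes the job; here a plain in-process queue stands in, and the sensor values are stubbed rather than read over I2C:

```python
# Minimal sketch of the job flow: scheduler tick -> queue -> worker.
# A stdlib queue stands in for Redis/RQ; sensor values are stubbed.
import queue

job_queue = queue.Queue()  # stand-in for the Redis queue

def read_all_sensors():
    """Stand-in for reading the Atlas Scientific sensors over I2C."""
    return {"temperature_f": 79.6, "salinity_ppt": 35.0,
            "water_level_in": 11.8, "ph": 8.05}

def schedule_tick():
    """What the scheduler does on each one-minute cron tick:
    push a 'read sensors' job onto the queue."""
    job_queue.put({"task": "read_sensors"})

def worker_step():
    """What the worker does: pull the next job and execute it."""
    job = job_queue.get()
    if job["task"] == "read_sensors":
        return read_all_sensors()
    return None

schedule_tick()
reading = worker_step()
print(reading["ph"])  # 8.05
```

In the actual project the job body would also consult Postgres for the sensor configuration before reading, as described above.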

corals_02

Aquarium IoT monitoring solution architecture diagram

Once collected, the data is stored in InfluxDB. After the telemetry data is collected and stored, a new event job is added to the Redis queue for an RQ worker to evaluate the telemetry and act on it accordingly.
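Before being written, each reading has to be serialized into InfluxDB's line protocol (measurement, tags, fields, timestamp). Here is a simplified sketch of that serialization step; the measurement and tag names are illustrative, not taken from the project, and a real write would normally go through an InfluxDB client library and handle type suffixes and escaping:

```python
# Simplified InfluxDB line-protocol serialization for one reading.
# Real line protocol also distinguishes ints ("i" suffix) and quotes
# string field values; this sketch handles floats only.
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "aquarium",                              # illustrative measurement
    {"tank": "display"},                     # illustrative tag
    {"temperature_f": 79.6, "ph": 8.05},     # float fields
    1599550000000000000,                     # nanosecond timestamp
)
print(line)
# aquarium,tank=display temperature_f=79.6,ph=8.05 1599550000000000000
```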

I also purchased home automation tools from WeMo, which is owned by Belkin. They’re pretty cool because they can be controlled within your home network using multicast. Adding the WeMo switches to the stack gave me the ability to turn devices on and off based on the telemetry data collected. An example is when a high temperature threshold is met: the RQ worker pulls the event from the Redis queue, and based on the event definition it knows exactly which WeMo device to call by its MAC address. The RQ worker then sends a multicast message to the switch to toggle the power on or off and reports the results back to the Redis job status.

I also have another set of automation that is not directly integrated with AquaPy. For instance, I have an auto top-off set up: an optical sensor detects the water level, and if it drops too low, fresh water is automatically added to the tank. Of course, if too much fresh water is added, the salinity would fall more than is acceptable. Simply adding an extra gallon of fresh water can stress the coral, and I could lose a colony. On the flip side, if there’s too much evaporation, the salinity level could become too high. If the water’s pH is skyrocketing, it could mean my doser is failing; the doser is responsible for adjusting the calcium and alkalinity. If the doser switch is stuck in the ON position, it will start dumping unnecessary chemicals into the tank. All of these factors can offset the balance of the tank.
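The event-evaluation step can be sketched as a simple lookup: the worker matches a telemetry reading against event definitions and decides which switch, identified by MAC address as described above, to toggle. The threshold, MAC address, and event structure below are made up for illustration, and the actual multicast call to the WeMo device is omitted:

```python
# Sketch of event evaluation: match telemetry against threshold
# definitions and return (mac, action) pairs to dispatch. The
# threshold, MAC, and structure are illustrative.
EVENTS = [
    {"metric": "temperature_f", "above": 81.0,
     "device_mac": "AA:BB:CC:DD:EE:FF", "action": "on"},  # cooling fan
]

def evaluate(telemetry):
    """Return the (mac, action) pairs triggered by this reading."""
    actions = []
    for event in EVENTS:
        if telemetry[event["metric"]] > event["above"]:
            actions.append((event["device_mac"], event["action"]))
    return actions

print(evaluate({"temperature_f": 82.3}))  # [('AA:BB:CC:DD:EE:FF', 'on')]
print(evaluate({"temperature_f": 79.5}))  # []
```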

Caitlin: After implementing InfluxDB, what did you learn about your aquariums?

Jeremy: Thanks to InfluxDB, I was able to set thresholds for temperature and other key metrics. If the water temperature rises above a certain level, I have a fan set up to automatically turn on. By automatically triggering the fan on, I’ve had less evaporation and the tanks have cooled down. As soon as the temperature has sufficiently dropped below the recovery threshold, the fan turns off automatically.
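The fan behavior described here, with a trigger threshold and a separate recovery threshold, is a hysteresis band: the fan turns on above the high threshold but only turns off once the temperature falls below the lower recovery threshold, so it doesn't rapidly toggle near the setpoint. A minimal sketch, with illustrative threshold values:

```python
# Hysteresis band for the cooling fan: on above HIGH_TEMP_F, off
# below RECOVERY_F, unchanged inside the band. Values illustrative.
HIGH_TEMP_F = 81.0   # turn fan on above this
RECOVERY_F = 79.5    # turn fan off below this

def fan_state(temp_f, fan_on):
    if temp_f > HIGH_TEMP_F:
        return True
    if temp_f < RECOVERY_F:
        return False
    return fan_on  # inside the band: keep current state

state = False
for t in (80.0, 81.4, 80.2, 79.2):
    state = fan_state(t, state)
    print(t, state)
# 80.0 False  (below trigger, fan stays off)
# 81.4 True   (above trigger, fan turns on)
# 80.2 True   (in band, fan stays on)
# 79.2 False  (below recovery, fan turns off)
```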

By monitoring my tank continuously, I know when something is amiss. A simple power outage can have a snowball effect on the health of my coral and tanks. Without power, the lights don’t work, which means the natural algae can’t convert the carbon dioxide back into oxygen. Even if the power is out for just three hours, some of my coral could die. This is especially important if I’m introducing new pieces of coral to the tanks, as they haven’t acclimatized to their new home.

I now have a UPS as a battery backup. It does help being a network engineer! I now have enterprise-grade network equipment set up at home. Having spent a lot of time and money on my saltwater aquarium, I want to make sure I don’t lose anything. My internet connection, PoE network switch, router, firewall, etc. are all on separate UPSes. I recently moved, and while I don’t experience many power outages, there are still occasional brownouts.

I’m using Grafana to visualize and graph all of my data. Originally, I was using one of Grafana’s plugins to send me Slack alerts. Thanks to Slackbots, if I get a notification on my phone, I know to check it for an update.

corals_03

Grafana dashboard displaying aquarium sensor data

Caitlin: What are your future plans with your aquariums and InfluxDB?

Jeremy: As I run my tanks with higher levels of calcium and alkalinity, I want to create some form of controlled studies around home aquariums. I’d like to be able to demonstrate the benefits of running tanks at specific levels; faster calcification of hard corals hasn’t been proven with controlled studies. There’s a company called Bulk Reef Supply that is also working on short-lived experiments: they run different systems at various levels with the same coral for a few weeks to months and report the results.

Once I get more data into InfluxDB, I’d like to start correlating my data. With more time-stamped data, I’d like to set a baseline and determine a percentage deviation from that baseline, and I’d like to create these baselines for all systems. Right now, I’m not collecting metrics on threshold actions, such as when the fan turns on or off, or the time of day the dosing pump turns on. My new aquarium lights are controlled via Bluetooth. In addition to adding all of this data into InfluxDB, I’d also like to better incorporate the seasonal daylight times from Indonesia and northeast Australia. As more of my coral is from there, I’d like to mimic the natural daylight cycle.

Long-term goals include helping the scientific community improve the natural coral reefs. By improving the world’s practices for growing coral in captivity, hopefully we can stop needing to go back to the natural reefs for coral. It would be amazing if we could aquaculture enough coral to give back to the natural coral reefs that we’re destroying as a society.
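One simple way to express the "percentage deviation from baseline" idea is to take the mean of recent readings as the baseline and report each new reading as a percent offset from it. A sketch with made-up pH readings (the window size and values are illustrative, not from the project):

```python
# Percent deviation of a new reading from a baseline computed as
# the mean of recent history. Readings below are made up.
from statistics import mean

def percent_deviation(history, current):
    baseline = mean(history)
    return 100.0 * (current - baseline) / baseline

readings = [8.10, 8.05, 8.12, 8.08]   # recent pH readings (baseline window)
print(round(percent_deviation(readings, 7.95), 2))
# -1.7  (pH is about 1.7% below the baseline)
```

In practice this kind of aggregation could also be done server-side with an InfluxDB query rather than in Python.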

Here is a list of all of the parts I’m using:

Do you have questions for Jeremy? Join InfluxDB on September 23, 2020 for our virtual Time Series Meetup as he demonstrates how to use InfluxDB and Grafana to monitor your aquarium!

RSVP today.




Open Source Lessons Learned


In February of 2017 I posed the questions below and received the respective responses from a few veterans in the open source [network automation] community. Though it has been a few years, and I am certain that thoughts and retrospectives have changed since, I think this serves as a good point-in-time capture of the authors’ thoughts.

A few of the industry best were nice enough to respond.


Original Question: I realized that most channels on here (referring to Network to Code Slack) are either vendor-focused or for a given open source project. Of the latter, most of these open source projects were started by people at a company that gave their work back to the community. I’m wondering if you all would indulge me a bit in commenting on any of the following:


How much time does your employer (or former employer) allow you to spend building the project? What is the breakdown of personal vs. “on the clock” time spent?

Matt Oswalt The employer I was working for at the time had a pretty aggressive IP clause in the employment agreement, so I went through legal to get approval first. But other than that, I had an understanding with my boss that I’m a big boy and that I’d take care of my shit. I know not everyone gets that arrangement, but it’s how it worked out for me. Same goes for my current employer – even more so actually because now my day job is open source, so it’s by no means discouraged, again as long as I get my work done. There’s not really a “clock” as long as deliverables are met. (with the exception of scheduled meetings, etc)

Jeremy Stretch NetBox is one of my primary ongoing projects. While my workload varies quite a bit day-to-day, a typical week includes at least a day or two’s worth of time spent on NetBox development.

Jathan McCollum Network Source of Truth (NSoT) is a core product of NetEng at Dropbox, and so I am able to spend as much time as necessary working on it on the clock. I’d say it’s about 80% on, 20% off the clock.

David Barroso Hard to tell. I have been lucky enough to work most of the time based on objectives so as long as the job is done nobody really cares what/when/where I do things. In any case, it takes a lot of personal time to manage a project like napalm.

Recap: All of these projects were at least in some way sponsored by a parent company, either formally or informally. These are by and large forward-thinking companies that realize the value of open source: the feedback and code contributed back far exceed any potential competitive advantage.


Were there any hurdles to open source the project? Was this decision company driven, self driven, other?

Matt Oswalt Again, in my case, I felt compelled to be explicit with my legal department for my employer at the time. I probably could have avoided that and just pushed it, but in my eyes it was simple enough to make my case, and I had the support of my manager, who wanted me to get more involved in open source anyways. So it was a self-imposed hurdle for the most part.

Jeremy Stretch No hurdles at all. I’m very fortunate that DigitalOcean is a strong supporter of open source, and was encouraged at all levels to make NetBox public.

Jathan McCollum We started the project as open source from the get go, which greatly simplified things. There wasn’t any legacy cruft or any internal implementation details to factor out. It helps that Dropbox has a very pro-open-source attitude, so starting new projects openly is encouraged.

David Barroso It was mostly driven by me. The hardest part is certainly managing the project; making sure everybody involved goes in the same direction, that issues are handled properly, PRs reviewed, new versions released… it’s hard and requires a lot of work (DM me if you want to be a PM for napalm xD).

Recap: Being self-driven or taking an open-source-first mentality is helpful. The legal process is often not well understood on either side, and proactively bringing the project to legal yourself, taking an outside-in approach, is not a bad strategy.


How did word spread about your project?

Matt Oswalt Tbh, I don’t know of one particular way that word spread….I know there was a pretty popular reddit thread a while back, and I’ve done episodes for Ivan and Packet Pushers, so that’s probably been the biggest avenue. One thing that surprised me was that people actually brought it up in my latest job search – in fact, one entire interview was “explain to me how ToDD works”. No joke!

Jeremy Stretch I announced the open source release on my personal blog, which provided the initial publicity. Word spread pretty quickly, and hosting it in DO’s already-popular GitHub account surely helped.

Jathan McCollum Mostly through the NTC community, GitHub, and IRC. @jedelman8 also invited me to speak last year about NSoT at Interop in Las Vegas. I’m sure that helped!

David Barroso Talks, slack, podcasts, social media, word of mouth… Any free form of communications :stuck_out_tongue:

Recap: A mix of mostly self-promotion and good ideas got these projects moving. A healthy dose of leveraging external platforms such as Slack, talks, and podcasts is certainly helpful. One thing not captured here is that the maintenance burden on all of these projects is high. It is really not an easy task to keep up with open sourcing a project; there are many barriers to entry, the largest of which is time.


Did open sourcing make it easier or harder to maintain? What is the best and worst part about the larger community?

Matt Oswalt Easier. Getting folks like @vcabbage on board that have more Go experience than I do was super helpful.

Jeremy Stretch It’s definitely harder to maintain as an open source project, simply because I can’t control the pace at which issues are opened. Sometimes the influx of requests and bugs is difficult to keep up with, but overall the feedback is invaluable.

Jathan McCollum Open sourcing an existing project will definitely make it harder to maintain at first, primarily because of the internal reviews and refactoring of internal implementation details into configurable settings or plugins.

I already have experience maintaining projects in the open because of my other project, Trigger. Since NSoT started as an open source project, it was much easier to frontload any issues of internal settings to matters of configuration settings within the project.

I feel that in the long-run open source projects become much stronger and easier to use than internal projects because of the requirement to always consider how others who don’t have access to your internal services might utilize the project. The best part about larger community: Having people you might not ever meet or work with help you solve complex problems you just haven’t had time to get to, can’t reasonably prioritize yourself because of business requirements, or catching typos in docs you overlooked. The self-selection of the technical community around the projects is inspiring.

The worst part about larger community: Bad contributions. People who don’t know how to read docs, aren’t native English speakers, who don’t understand things like code style and unit testing… They can be a drain and a time sink and you sometimes have to make the hard call to say “No I can’t help you” or “No, I will not accept your pull request”.

David Barroso Certainly harder. You have to take into account things you might not otherwise. For example, documentation is harder as you can’t assume knowledge or expertise, code has to be more generic and avoid business logic… It certainly adds a lot of overhead. The best of working with the larger community is certainly the community itself. I never imagined how “big” (by some definition of big) napalm was going to become and how many people would get involved and get passionate about it. It’s great. The worst, having to manage the project :stuck_out_tongue:

Recap: A mixed bag of answers here, which is somewhat to be expected. There is inherently a pro and a con to open sourcing a project. One through line is documentation: you truly have to consider a much different audience for this to work correctly. Also, contributions are simply not that cut and dried.


If you “had” to do it all over again, what would you do differently? e.g. different language, more design upfront, use a data-driven model, fewer features, more features, etc.

Matt Oswalt I built ToDD to force myself into learning Go primarily – my goal was never to have any sort of productized or even remotely popular project. However, the idea held water after a bit, so I guess the only thing I regret is that I didn’t try a simpler project first, so I could ensure things were as idiomatic as possible. That’s sorting itself out as I grow, and as folks like @vcabbage contribute, but still there’s a lot of tech debt. Also, and this is because of the same reason, I regret not following TDD. I’m used to Python where I can just mock everything and anything – and in languages like Go, you have to design things to be testable. So I wish I had started down this path earlier, much earlier

Jeremy Stretch I would have stripped out the undocumented (and pretty awful) RPC functionality before releasing the project. And maybe invested a bit more time in the installation documentation.

Jathan McCollum

  • Should have implemented Python 3 support from the get-go. It’s still Python 2 only. :disappointed:
  • Should have decoupled the web UI from the backend, because I’m not that good at front-end dev and it’s lagging in feature parity. The GUI is still an important part of NSoT, I just don’t think it makes sense for it to be a core part anymore.
  • I would have started using Django from the get-go. We decided to build the app around Tornado at first, because Dropbox uses it. Not long after the project kickoff this turned out to be a bad decision and cost a LOT of time in refactoring everything around Django.

David Barroso I don’t think I would do anything different. Not because the project was perfectly done from the beginning (far from the truth) but because every mistake had positive consequences. Either it meant assumptions/simplifications that allowed the project to grow faster or it meant learning valuable lessons for the future. Improving/refactoring/deprecating code is part of the process.

Recap: Another mixed bag of answers here. The primary through line is that you can’t get it right the first time and should take an iterative approach. This is likely true of any programming project, and is echoed throughout much of the available literature on the topic.


Similarly, any lessons learned to pass on?

Matt Oswalt This is a lesson I’ve not only learned in my OSS projects (side and dayjob) but also in previous code-related jobs. Stop worrying about what your code looks like. Everyone wants to just push a single big commit, only when it’s perfect. Please don’t do that – if you have code you want to work on, work on it in the open. Make at least one commit a day, even if it’s shit (and push it to GitHub!). No one that cares to read your code is going to call you dumb…they’re going to help you. And you’ll naturally insist on higher quality just knowing that it’s going to be public in a few minutes. If you’re working on an idea, do it in a “work in progress” PR…we do this all the time on the StackStorm team, when we’re working on a PR that we’re not quite done with. That way, people can see where we’re at and let us know if we’re about to go off a cliff.

Jeremy Stretch Decide on and establish your primary medium of communication before the public release. I had set up a subreddit initially, then pivoted to a mailing list, which has caused some confusion.

Jathan McCollum Don’t be afraid to share your code! I’ve seen many network people afraid of the scrutiny of their code being on GitHub because “they aren’t coders”. Don’t let that hold you back. The majority of people in open source genuinely want to help and spend their free time doing this FOR FREE. Never forget that. I want to see your code because it shows me how you think, and to a more selfish degree: I want you to help me make my projects better! So never forget that either: There is no selfless act, especially in the open source community.

David Barroso This is no different from any other project you have already worked on. People with different levels of expertise might jump in and people will have opinions. Don’t take things personally, and try to learn and have fun.

Recap: Do not suffer from imposter syndrome! I have heard Matt speak about this a few times, and I think he does a good job recapping here. It is certainly vulnerable to put yourself out there, but the pros far exceed the cons here.


Conclusion

Overall, great feedback from a great community. As noted, the information is a few years old, but that is not to say it isn’t still as relevant as ever. In fact, a lot has changed: both Jathan and Jeremy are now part of the Network to Code team!

If you want to chat with these industry vets and anyone else in the network automation community, be sure to check out the Network to Code Slack Workspace, which has a self-signup page: slack.networktocode.com.

A big thanks to Matt, Jeremy, Jathan, and David for allowing us to post this.

-Ken


