New SparkFun Data Service

We've launched a bit bucket for all your data.


I'm excited! We've got a new service to offer today. It's the SparkFun data channel over at data.sparkfun.com.

Over the last 6 months I've had multiple people tell me: "What SparkFun really needs is a data channel!" Like a good politician, I told them thanks and that I'd look into it. I never really understood why a person would need to store temperature values in a database. That all changed when I built my weather station.

Weather Station in all its glory

After getting the hardware working, I explored how to post data to Wunderground, the community-driven site that aims to "make quality weather information available to every person on this planet." Neat. But how do I push my weather data to them? It turns out there's a protocol for uploading data from your personal weather station to Wunderground.

Go ahead, try posting this link into a browser tab:

http://rtupdate.wunderground.com/weatherstation/updateweatherstation.php?ID=KCOBOULD115&PASSWORD=SparkFun&dateutc=2014-07-01+03%3A32%3A42&winddir=270&windspeedmph=7.0&humidity=45.4&tempf=43.1&baromin=29.4161&realtime=1&rtfreq=10&action=updateraw

You should see a page with the single word "success". You can then view the weather station data you just posted here.
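
If you'd rather build that URL in code than by hand, here's a rough sketch in Python; the field names and values are lifted straight from the example link above, so check Wunderground's upload documentation for the full list of supported fields:

# Rough sketch: build the same Wunderground update URL in Python 3.
# Station ID, password, and field values are the demo values from the link above.
from urllib.parse import urlencode
from urllib.request import urlopen

params = {
    "ID": "KCOBOULD115",
    "PASSWORD": "SparkFun",
    "dateutc": "2014-07-01 03:32:42",  # urlencode handles the spaces and colons
    "winddir": 270,
    "windspeedmph": 7.0,
    "humidity": 45.4,
    "tempf": 43.1,
    "baromin": 29.4161,
    "realtime": 1,
    "rtfreq": 10,
    "action": "updateraw",
}

url = ("http://rtupdate.wunderground.com/weatherstation/updateweatherstation.php?"
       + urlencode(params))
print(urlopen(url).read())  # the response body should contain the word "success"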

I am not a software person. I can't set up a server, nor can I piece together how to spin one up in the cloud. Discovering that I could concatenate a series of strings and push live data to the Wunderground servers was a moment of complete awe. I can do this! This cloud thing! YES! Suddenly it was so easy to push regular data from my embedded system out to the world for public viewing.

But as I worked through the weather station project, I wanted to monitor the battery level over time to see how the solar charger was performing. Unfortunately, Wunderground doesn't accept custom fields. Once I understood that I could do an HTTP request from an Electric Imp, I suddenly wanted to be able to post data from all my projects. I floated the idea with a few other folks, and the project was started.

Today we present Phant (an elephant never forgets, and npm didn't already have a project with that name). Now stick with me: Phant is the engine; data.sparkfun.com is our free hosting of that engine. This means you can push whatever data you want to our service. Here's an example:

https://data.sparkfun.com/input/Jxyjr7DmxwTD5dG1D1Kv?private_key=gzgnB4VazkIg7GN1g1qA&brewTemp=Too_Hot

Post this link into a browser, then check out the public feed. You did that. You! You just posted data to the internet! Try changing the last bit to "Too_cold" or "56.2" or anything else. URLs can't have spaces, so you might need to replace your spaces with %20 ("Happy%20Wednesday", for example).
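
If you'd rather let code handle the encoding, here's a minimal sketch in Python using the same demo keys as the link above; quote() takes care of turning spaces into %20 for you:

# Minimal sketch: post one value to the example stream from Python 3.
# These are the demo keys from the link above; swap in your own stream's keys.
from urllib.parse import quote
from urllib.request import urlopen

public_key = "Jxyjr7DmxwTD5dG1D1Kv"
private_key = "gzgnB4VazkIg7GN1g1qA"
brew_temp = "Happy Wednesday"  # spaces are fine here; quote() turns them into %20

url = ("https://data.sparkfun.com/input/" + public_key
       + "?private_key=" + private_key
       + "&brewTemp=" + quote(brew_temp))
print(urlopen(url).read())  # the response tells you whether the post was accepted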

On the Electric Imp, pushing data to our service looks something like this:

local part1 = "http://data.sparkfun.com/input/";
local myKey = "Jxyjr7DmxwTD5dG1D1Kv";
local part2 = "?private_key=";
local privateKey = "gzgnB4VazkIg7GN1g1qA";
local part3 = "&brewTemp=";
local tempReading = thermistor.read(); // stand-in for whatever sensor you're logging

local bigString = part1 + myKey + part2 + privateKey + part3 + tempReading;

local response = http.get(bigString).sendsync(); // build the GET request and push this data to the SparkFun data channel

See the simple beauty here? All you have to do is string a bunch of sensor data together from whatever hardware you're using and throw a link out into the world. Phant never forgets. And almost any embedded device can stick a bunch of strings and variables together!

Wait, wait, wait. I'm not buying these never-sharpen knives. What's the catch?

Channel creation is free. We have rate limited the service to 100 updates every 15 minutes (that works out to one every 9 seconds), with a maximum data size of 50MB (we automatically throw out the oldest records 50kB at a time).
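
If you're scripting your uploads, here's a rough sketch of pacing them to stay inside that window. It reuses the demo keys from the example above, and read_sensor() is a made-up stand-in for whatever your hardware actually measures:

# Rough sketch: pace uploads to stay under 100 updates per 15 minutes
# (one update every 9+ seconds). read_sensor() is a made-up placeholder.
import time
from urllib.parse import quote
from urllib.request import urlopen

INPUT_URL = "https://data.sparkfun.com/input/Jxyjr7DmxwTD5dG1D1Kv?private_key=gzgnB4VazkIg7GN1g1qA"

def read_sensor():
    return 56.2  # stand-in for an actual sensor reading

while True:
    urlopen(INPUT_URL + "&brewTemp=" + quote(str(read_sensor())))
    time.sleep(10)  # a little slower than the 9-second minimum, for margin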

I don't trust you with my data.

That's cool. We feel the same way about Nest, Google, the NSA, and teeth fillings from time to time. Phant is totes open. You are free to copy and deploy your own version of phant onto any server you like so that you have complete control over your data. If you just need a place to stuff the water level on your aquarium, consider data.sparkfun.com for now.

Is my data safe?

Most definitively probably. We're going to make every effort to maintain your data and provide an outstanding level of service. We'll try not to let you down, but there is always a slight possibility of data loss. You can absolutely download your data at any time: CSV and JSON are currently supported.
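
Pulling your data back down is just another URL. Here's a rough sketch against the example stream; the /output/<publicKey>.json path is an assumption based on the public feed above, so check phant.io for the documented output formats:

# Rough sketch: download a stream's logged data as JSON in Python 3.
# The /output/<publicKey>.json path is an assumption based on the example
# stream's public feed; see phant.io for the documented output formats.
import json
from urllib.request import urlopen

public_key = "Jxyjr7DmxwTD5dG1D1Kv"
records = json.load(urlopen("https://data.sparkfun.com/output/" + public_key + ".json"))
for record in records[:5]:
    print(record)  # each record is a dict of the fields you posted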

We feel it's important to point out that SparkFun is currently privately held, and we plan to keep it that way for the foreseeable future. There's nothing wrong with venture capital, but being private means we don't have unlimited resources. Conversely, we don't have investors nagging us to sell or do bad things with your data. Consider the SparkFun data channel to be neutral ground for devices to store data.

What is this phant thing?


Phant is short for elephant. Elephants are known for their remarkable recall ability, so it seemed appropriate to name a data logging project in honor of an animal that never forgets. Phant.io will give you all the information you need, including documentation about creating, clearing, and deleting feeds, as well as specifics about rate limits.

Why?

That's easy to answer! We needed it, and we thought you could benefit from it too. All the other options we found were too convoluted to use and had strings attached. We're hoping you buy a WiFi module, a cellular module, a RedBoard, an Ethernet module, or some other embedded device that captures data and needs a place to put it.

For your next project consider creating a channel to log the data from your device. We sincerely hope you find it easy and powerful to use.

Check out these tutorials for examples of how to get a widget reporting to data.sparkfun.com:


Comments (28)

  • ThorntonDan / about 10 years ago / 5

    Congrats on the new service, this is excellent and a very simple API to use! Any plans to add a graphing component? This data is neat but visualization would be a huge benefit.

    • todd wrote up a tutorial on using Google Charts for graphing. It should be fairly easy to pull data to a variety of third-party services.

      That said, we might work up something more directly integrated eventually. (And if anybody wants to tackle the problem, contributions are most definitely welcome.)

  • BSOD / about 10 years ago / 2

    Any chance you guys are going to release the code for your nifty front end website used on data.sparkfun.com?

  • NorthStreetLabs / about 10 years ago / 2

    This sounds epic, thanks!

  • rub0t / about 10 years ago / 2

    Very cool. I like the simplicity. Just curious, is the rate limit configurable if I were to install it on my own server?

    • It certainly is! Since the core is all open source you just need to find the spot in code that does the throttling and tune it how you like or disable it entirely. This can be found in the core library: memory_throttler.js.

  • LED addict / about 10 years ago / 2

    That's awesome! I don't actually have any boards capable of utilizing it, but this is a great plus for me to get one.

  • vegasloki / about 10 years ago / 2

    Well done gang. Way to think ahead of the curve.

  • Member #286247 / about 10 years ago * / 2

    Here is how I send the HTTP request from the Imp....

    http.get(bigString).sendasync(function(resp)
    {
      if (resp.statuscode == 200)
      {
        server.log("SparkFun Success!");
      }
      else
      {
        server.log(format("%i: %s", resp.statuscode, resp.body));
      }
    });
    

  • schreiaj / about 10 years ago / 2

    Why not use an existing library like Cube and build a simple wrapper around their engine to support your desire to push to a simpler API?

    That being said, this looks cool and I'll likely be taking a look at the code.

    • todd / about 10 years ago / 3

      We wanted to be able to create a server that is very simple to install, and would run well in a production environment and on single board computers like the Raspberry Pi or BeagleBone Black. Cube requires MongoDB, which is a lot of overhead for this type of simple data logging. The default install of phant only requires node.js, so it's fairly easy to set up your own server on a Pi or BeagleBone Black.

      • schreiaj / about 10 years ago / 2

        So are you guys just writing to a file? (Sorry, just learning Node.js so digging through code is taking a while)

        • todd / about 10 years ago / 5

          By default it writes metadata about the stream (title, description, etc) using a file based db called nedb, and it appends the actual logged data to csv files that are split into 500k chunks. When the user requests their logged data, all of the files are stitched back together, converted into the requested format (JSON, CSV, etc), and streamed to the user's web client.

          For the production server, we are currently using MongoDb for metadata storage and CSV for logged data storage.
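
          To picture that stitching step, here's a hypothetical sketch (not phant's actual code; the chunk file naming and layout are made up for illustration):

          # Hypothetical illustration of stitching chunked CSV files back into
          # one stream of rows. The chunk_*.csv naming scheme is invented here,
          # and zero-padded chunk numbers keep sorted() in order.
          import csv
          import glob

          def stitch_chunks(stream_dir):
              header_seen = False
              for path in sorted(glob.glob(stream_dir + "/chunk_*.csv")):
                  with open(path, newline="") as f:
                      for i, row in enumerate(csv.reader(f)):
                          if i == 0:            # first row of each chunk is its header
                              if header_seen:
                                  continue      # only emit the header once
                              header_seen = True
                          yield row

          # Usage sketch:
          # for row in stitch_chunks("streams/Jxyjr7DmxwTD5dG1D1Kv"):
          #     print(",".join(row))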

          • Wendo / about 10 years ago / 1

            Any chance of adding SQLite support for both metadata and logged data? Support for it is in pretty much all languages, and it's not going to chew up resources. It's also available on damn near every platform.

            • Let's say there is definitely a nonzero chance, but I'm not sure how much magnitude to ascribe to it.

  • oooooooooooooooooo / about 10 years ago / 1

    So, this only appends the data to a csv file, but there are no plots and no aggregations? This is very disappointing, as there is already a mature and open source solution: thingspeak.

    I personally use rrdtool for this kind of data.

  • It didn't take them long to start spamming the public stream.

    • Yeah, this is not exactly a surprise. We might eventually need to rethink the basic example.

  • D_C / about 10 years ago / 1

    This is really cool.

    Though I do suggest adding a delete stream button on the stream view page to help keep down excess data (I feel some, including me, might end up being too lazy or forget about the delete stream link and just leave the data up; oops in advance).

  • Wow, that's awesome! The best part is that you can create your own private data server! Usability-wise I still think dweet.io beats this. But yes, I like that fancy page with the live data :)

  • Earlz / about 10 years ago / 1

    This reminds me of a weekend project I threw together called netbounce ( http://netbounce.earlz.net ). It might be seen as useful to some people where you don't need to store the data, but just need a live stream of it as it comes out. I personally used it for debugging an HTTP client API.

  • 172pilot / about 10 years ago / 1

    This is way cool. I had implemented similar functionality at home using the free version of Splunk so I could keep track of different sensors around the house (garage is open, living room motion detector, etc.), but this is a quick and easy way to get that data into the cloud. I'll definitely be on the lookout for more ways to query the data... Having a Splunk-like query tool would be AWESOME...

    You should consider offering this as a cloud service for anyone who wants to store more data on the SparkFun cloud, or keep it longer.

  • Nate, using your products, I built a weather station 2 years ago =) I did almost the same thing, but with graphs... madalozzo.it/dados.php There are some photos here: madalozzo.it/fotos.html It is in Portuguese, but I think everyone understands images and icons.

  • D_C / about 10 years ago / 1

    Nice!

    Also, I tested sending a set of digits from 0 to 9 to a stream using Python on my Raspberry Pi, and they were separated by only 100ms! That is great.

    For those trying to do a random send to a stream using Python, it is as simple as this...

    import urllib
    urllib.urlopen("<your URL here>")  # Python 2; in Python 3 use urllib.request.urlopen
    

    Random note... after creating a stream, it shows an example link with a data value of 13.61, but when clicked it actually directs to a URL with 21.79 as the data value :) Confused me for a moment, lol

  • Scrubb / about 10 years ago * / 1

    Pretty cool. Congrats. If I read this right, it's like an open version of what xively.com is doing. I'm a fan. One other awesome service I use in conjunction with a data channel is zapier.com, so I can create thresholds and actions when data arrives. I'm looking forward to exploring data.sparkfun.com and sending you lots of trivial^H^H^H^H^H^H^H very important data!

  • trevor / about 10 years ago / 1

    For another perspective on how to connect a Raspberry Pi to data.sparkfun.com, praveen wrote a nice article entitled Raspberry Pi Pushing Data to the SparkFun Server.

    Trevor
