Member Since: January 6, 2011

Country: United States


Spoken Languages


Programming Languages

C/C++, Java, Scheme, LabVIEW


Worcester Polytechnic Institute class of 2010, BS in Mechanical Engineering and Robotics Engineering

  • Awesome stack, but I feel like there is one glaring flaw: unless I am going blind, it looks like the USB connection runs off the same ‘side’ of the stack as all of the other ‘long’ boards. This would make it potentially impossible to plug the stack directly into a USB port without disassembling it. While I realize one could simply use a USB female-to-male extender to get around this, I assume the whole point of not using a MicroUSB port was to allow direct plugging. Flipping that USB board around might be a thought for future iterations of the shield.

    Other than that, this seems like a pretty cool package. Kind of like an ‘Edison Lite’.


    Upon looking at it more, it looks like the control board is the only one with the ‘long’ end out the other side, meaning that even if you flipped the USB port you still wouldn’t be able to plug it in. A conundrum, that. I’d be interested in what the reasoning was behind having the control board, not the USB plug, break the ‘which end is long’ convention.

  • We ship many robots to customers. But these aren’t $100 vacuum cleaners. They’re… quite a bit larger than that. An example: CAT’s autonomous mining trucks (disclosure: I do not work for CAT; I do work for a company that makes the autonomy system).

    Again, you need to be more specific about scale. If we’re talking about something the size of a Roomba, then of course you aren’t going to use a 200W computing system. But if we’re talking about something the size of a car, or a small building, that’s a completely different story.

    And I have to disagree with you about the FPGA as well. I can’t really get into specifics because a lot of it is NDA’d, but let’s just say we can do a lot more for a lot less power (and with MUCH less latency) on an FPGA than we can on a GPU, although the Tegra K1 and its successor might wind up changing that a bit. That’s not to say there aren’t lots of really useful uses for GPUs; there are just specific problems that lend themselves very, very well to FPGA implementations.

  • Sure, but then I can use a Jetson and get nearly 60FPS @ 480p color for only an extra Watt of power, and still have 4 Cortex A15s to do things other than the image processing. My point is that there are much better solutions in terms of Performance/Watt for image processing than an Atom that doesn’t have a GPU.

    Also, drones/UAVs fit pretty squarely into the category of ‘fast-moving vehicles’. Speaking from experience, even on the ground, 2FPS image data for navigation isn’t very useful if you’re moving much faster than 1-2m/s; the quick math is below.
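
    To put a number on that claim, here’s a trivial sketch; the 2FPS and 2m/s figures are just the ones from this comment:

    ```cpp
    // Back-of-the-envelope check: how far a vehicle travels between camera
    // frames at a given speed and frame rate.
    #include <cstdio>

    int main() {
        const double fps   = 2.0;  // camera frames per second
        const double speed = 2.0;  // vehicle speed in m/s
        std::printf("At %.0f FPS and %.0f m/s you travel %.2f m between frames.\n",
                    fps, speed, speed / fps);  // prints 1.00: a full meter blind
        return 0;
    }
    ```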

  • These are variants of the Silvermont core built on 22nm. Their Performance/Watt compares quite favorably to Cortex A15.

    The older Atoms were always very competitive on the actual CPU core; it was the rest of the chipset that was terrible. Silvermont fixed that a good bit by finally moving to a legitimate SoC: Silvermont-based tablets got battery life comparable to their Android/iOS equivalents, and that’s with Windows 8 being more resource-intensive than Android/iOS.

    Intel has many, many development boards targeted specifically at smartphone/tablet manufacturers. A board with no GPU is pretty much useless for any smartphone/tablet dev; modern Android and Windows both require a GPU to even boot.

    Also, I think you’re severely mistaken if you think that no robotics company ships an x86-packing robot if they can help it. Nearly every robot at the company I work for runs at least one, and frequently more than one, Intel CPU. The power draw of the CPU is absolutely dwarfed by the power draw of the rest of the system. I think what you mean to say is that no small robot does, but that really has nothing to do with x86 itself and everything to do with there not being an x86 CPU with the requisite power envelope other than Atom, which has never really had a development platform suitable for small-scale robots.

    As for your comment about GPUs, they have that already… they’re called FPGAs/ASICs. You run a GPU when you want to be able to develop software easily and rapidly, and the power cost vs. an FPGA implementation is outweighed by the ease of using commodity tools like OpenCV or nVidia’s own CUDA-accelerated implementation. Even then, their Jetson board gets you a Tegra K1, which has a full Kepler SMX plus 4 Cortex A15s in a 2W power envelope.

  • I think you’re a bit ambitious about stereo vision processing at anything greater than ~240p. OpenCV running on a 2.5GHz Core i7 can only manage a few FPS doing rectification and a disparity map on a 720p stereo pair without GPU assistance; an Atom is definitely not going to keep up.
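
    For reference, a minimal CPU-only version of that kind of pipeline using OpenCV’s block matcher, cv::StereoBM; the file names and parameters are illustrative, and the inputs are assumed to be already rectified:

    ```cpp
    // Minimal CPU-only disparity computation with OpenCV's StereoBM.
    // Assumes left.png/right.png are an already-rectified stereo pair.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgcodecs.hpp>

    int main() {
        cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
        cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
        if (left.empty() || right.empty()) return 1;

        // numDisparities must be a multiple of 16; blockSize must be odd.
        const int numDisparities = 64;
        cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(numDisparities, /*blockSize=*/15);

        cv::Mat disparity;  // output is CV_16S, disparities scaled by 16
        bm->compute(left, right, disparity);

        cv::Mat disp8;      // rescale to 8-bit for saving/viewing
        disparity.convertTo(disp8, CV_8U, 255.0 / (numDisparities * 16.0));
        cv::imwrite("disparity.png", disp8);
        return 0;
    }
    ```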

  • I think you missed the next 3 sentences of my post, in which I basically said exactly what you just did: namely, that the Edison gives you a lot more processing power, as well as the flexibility of the task-management capabilities of an embedded Linux distribution.

    Although you’d be pretty amazed what you can fit onto an ATMega equivalent; Mint’s robots fit an entire SLAM implementation into one with about 0.25m accuracy, though the task space is obviously only the size of a standard room.

  • I think Intel’s biggest failure in causing market confusion about exactly what this is for was not making it explicitly clear that there is no GPU, nor any way whatsoever to get a video signal out of the board. An SoC without a video signal is very clearly not intended to be interacted with directly as a computer. Heck, the only reason the Raspberry Pi is even remotely useful as an SBC with as weak a CPU as it has is that it has a very, very nice GPU.

    Once I realized that, it was immediately clear that this was intended to be more ‘Arduino on crack’ than a viable SBC competitor. The fact that its default programming mode is via the Arduino IDE should have been a pretty big hint, but confirming there was no GPU was basically a dead giveaway.

    As to people looking at clock speed as a measure of relative performance… that’s just foolish. The ARM11 in the Raspberry Pi is something like 400% slower clock-for-clock than the Atom, never mind Performance/Watt. And there are two CPU cores in the Edison. Even with a 200MHz clock speed difference, the Edison is still much, much faster at pure number crunching.

    The Edison is really meant for applications where you’d use a traditional microcontroller like an Arduino or a bare ATMega/Cortex-M, but need a lot more number-crunching ability, supplied by the much more powerful Atom cores, plus the task-management facilities of a full embedded OS. As you said: think robot control hardware, PLC applications, industrial automation. A rough sketch of the contrast is below.
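
    To make that distinction concrete, here’s a rough sketch, assuming nothing more than a stock C++ toolchain on the embedded-Linux side (all names, rates, and workloads are made up). On a bare micro you’d interleave everything in one superloop; an OS lets you hand the heavy number crunching to another thread/core while a fixed-rate control loop keeps running:

    ```cpp
    // Heavy background computation scheduled by the OS alongside a
    // fixed-rate "control loop", something a bare superloop can't do cleanly.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<double> latest_estimate{0.0};

    void crunch() {
        double acc = 0.0;
        for (long i = 1; i <= 50000000; ++i) acc += 1.0 / i;  // stand-in workload
        latest_estimate = acc;
    }

    int main() {
        std::thread worker(crunch);  // OS schedules this on the other core

        for (int tick = 0; tick < 10; ++tick) {  // ~10Hz "control loop"
            std::printf("tick %d, estimate so far: %f\n", tick, latest_estimate.load());
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
        worker.join();
        return 0;
    }
    ```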

  • Reading the datasheet, it almost certainly does for any kind of reasonable performance. The two are likely left unintegrated so that an OEM could design a more space-efficient or differently arranged system if they desired.

  • The chart of the breakdown by department would be a bit more impactful if it showed the number of women relative to the total employees in each department. For example, if a company has 2 female engineers out of 2 engineers total, they’re definitely well above average. But if it’s 2 out of 100, well, you get the idea.

  • Unless I’m being stupid, I’m pretty sure the C++ specification does not guarantee that the extra elements in your arrays will be 0/‘\0’, which is what your strcmp call relies on to work. Maybe the Arduino compiler does something different, but that code definitely wouldn’t be portable. (A sketch of the relevant cases is below.)
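
    For what it’s worth, a minimal sketch of the two relevant cases in standard C++; the Arduino code under discussion isn’t shown here, so this is purely illustrative:

    ```cpp
    // Partial aggregate initialization vs. no initialization at all.
    #include <cstdio>
    #include <cstring>

    int main() {
        char a[8] = {'h', 'i'};  // a[2]..a[7] are value-initialized to '\0'
        char b[8];               // no initializer: contents are indeterminate

        // Well-defined: 'a' is guaranteed NUL-terminated by the standard.
        std::printf("%d\n", std::strcmp(a, "hi"));

        // Undefined behavior: reading 'b' before writing to it.
        // std::printf("%d\n", std::strcmp(b, "hi"));
        return 0;
    }
    ```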

No public wish lists :(