
tetsujin

Member Since: July 23, 2008

Country: United States

Profile

Interests

Model-making, programming, electronics

Websites

http://scope-eye.net

  • It’s a harsh reality that those of us who prefer working on other OSes frequently have to deal with - a lot of software is available on Windows, but not necessarily on other platforms. One can wind up needing to run Windows in order to run a certain program. It stinks, but that’s life.

  • Generally I prefer to run Linux. It’s just the system I feel most at home in. I tried Mac OS X for a while, feeling like it would give me the means to run Unix software (since it’s got all that Unix stuff under the hood) but also a more polished user experience… It really wasn’t right for me. It wasn’t a great system for running the Unix software, as it turned out - not in my opinion, anyway. It was a lot like running Unix software through Cygwin on Windows - you can do it, and there are package repositories to help, but there isn’t the same variety of packages you’d get with a real Linux distribution, and the nature of the integration is such that the Unix programs feel decidedly out-of-place and second-class. (IMO Cygwin is actually better in that regard…)

    Windows mostly just agitates me, and it seems to get worse in that regard all the time. I can install Cygwin on it and mostly pretend it’s a Unix box, but there are all these little things - like Cygwin’s insistence on remapping the filesystem through mounts, or the constant parade of system notifications and software update notifications - that pull me back to the reality of the system I’m on.

    Personally I believe that, despite the sort of “command-line machismo” that often comes with Unix fans, all users need some “user-friendly” design when they encounter something new. I just think the prevailing notions of “user-friendly” are geared toward certain types of users (probably the majority), but not necessarily others. That’s part of what tends to bother me about Mac and Windows: the systems do more hand-holding than I’m comfortable with, and occasionally even obstruct me.

    Choice of software is a problem, of course: there are many useful tools that are only on Windows, as others have pointed out. In my hobby work the software options on Linux are usually enough for me, but there’s bound to be something from time to time that can’t be done on Linux because the software is Windows-only. It’s an unfortunate situation, but generally speaking I’d rather deal with that than run Windows.

  • On the other hand, I’d love to have a LiPo circuit do silly things to a LiPo battery. (A solderable jumper could be used to enable said silly things, so they aren’t done to an AA pack by mistake.) But that’s just not part of the plan for this particular device, I guess. Being primarily a beginner/tutorial board, it’s not worth the added cost for a feature most beginners won’t be using.

  • There are advantages to using a separate chip for USB. In terms of the user experience, probably the main thing is that if the AVR itself is providing the USB implementation, you wind up losing the connection when the AVR resets. 32U4-based Arduino sketches that rely on the serial port for debugging often have to include code to wait until the device has been enumerated, because otherwise it’s very difficult to start up the board, let the PC enumerate the USB device, and connect a serial monitor to it in time to catch the debugging messages at the beginning of the program. The relative simplicity of an FTDI interface makes for a slightly better experience for people new to the platform. 32U4-based Arduino bootloaders also take up more program space than 328-based serial bootloaders (around 3.5KiB for the Leonardo bootloader compared to about 500 bytes for the Uno bootloader).

    As for price, the 32U4 is actually a bit expensive. On Mouser it’s about $3.50 per chip if you buy a full reel, compared to about $1.80 for a ’328 at the same quantity. The FTDI chip is around $1.50 per unit when you buy a reel, so ’328 + FTDI comes out a little bit ahead.
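
    To give a concrete picture of that enumeration wait - this is just the common Arduino idiom for 32U4 (Leonardo-class) boards, not anything specific to this product:

        // On a 32U4 board the USB serial port goes away whenever the AVR resets,
        // so wait for the host to re-enumerate it before printing anything.
        void setup() {
          Serial.begin(115200);
          while (!Serial) {
            ;  // blocks until something opens the port (forever, if nothing does)
          }
          Serial.println("setup() starting");  // early debug output is no longer lost
        }

        void loop() {
        }

    On a ’328 + FTDI board none of that is needed, since the FTDI chip stays enumerated while the AVR resets.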

  • Paired with the RedStick, though, it’d be awkward to get 8 I/O lines on a single port anyway. The only port with all 8 bits available for I/O is port D (two pins of port B are used for the oscillator). But two pins of port D are used for the UART, and (on the RedStick, at least) they’re physically separated from the rest of port D.

    …But yeah, can you even imagine setting those I/O lines one at a time like Arduino encourages you to? :)
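
    For anyone curious what the whole-port alternative looks like, here’s a rough sketch - it assumes an ATmega328-class part and simply leaves PD0/PD1 alone for the UART; it’s an illustration, not the RedStick’s documented pin mapping:

        // Drive PD2..PD7 as a group on an ATmega328-class board,
        // leaving PD0/PD1 untouched for the UART.
        void setup() {
          DDRD |= 0b11111100;              // PD2..PD7 as outputs; PD0/PD1 unchanged
        }

        // Replace the upper six bits of port D in a single write,
        // instead of six separate digitalWrite() calls.
        void writeUpperPortD(uint8_t value) {
          PORTD = (PORTD & 0b00000011) | (value & 0b11111100);
        }

        void loop() {
          writeUpperPortD(0b10101000);     // example pattern
          delay(500);
          writeUpperPortD(0b01010100);
          delay(500);
        }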

  • Well, the way you phrased the question, it sounded like you didn’t. If your comment about using a ninth pin was in there when I replied, I guess I missed it. Sorry - I didn’t mean to explain something you already understood.

    I don’t know specifically what went into their design decisions there. I suppose they were trying to strike some balance between capability and economy, given that this was originally a giveaway item.

  • It’s Charlie-plexed and uses 8 I/O lines. 8x7 is the maximum size under those conditions.

    Basically, to light an LED you choose one I/O line to apply positive voltage to, and one to apply negative voltage to. When you select the line for positive voltage, it can be any of the 8. But when you select the one for negative voltage, it can’t be the same line you chose for positive voltage, so you only have 7 choices. Thus, 8x7.
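
    Sketched out, that looks something like this (the pin numbers are placeholders, not the board’s actual wiring): every line idles as a high-impedance input, and then one is driven HIGH and one LOW to pick out a single LED.

        // Minimal Charlieplexing sketch: light the LED between one "high" line
        // and one "low" line, with all other lines left as high-impedance inputs.
        const uint8_t lines[8] = {2, 3, 4, 5, 6, 7, 8, 9};  // placeholder pins

        void lightLed(uint8_t hi, uint8_t lo) {
          if (hi == lo) return;              // the one illegal combination
          for (uint8_t i = 0; i < 8; i++) {
            pinMode(lines[i], INPUT);        // high-impedance: neither sourcing nor sinking
          }
          pinMode(lines[hi], OUTPUT);
          digitalWrite(lines[hi], HIGH);     // positive side
          pinMode(lines[lo], OUTPUT);
          digitalWrite(lines[lo], LOW);      // negative side
        }

        void setup() {}

        void loop() {
          // Scan all 8 x 7 = 56 valid (hi, lo) pairs, one LED at a time.
          for (uint8_t hi = 0; hi < 8; hi++) {
            for (uint8_t lo = 0; lo < 8; lo++) {
              if (hi == lo) continue;
              lightLed(hi, lo);
              delay(10);
            }
          }
        }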

  • I want to expand a bit on what I said in case there’s anybody who didn’t understand what I was getting at, but is curious:

    The top-level answer, as Mike said, is that the improvements in the ARMv8 core would make it faster than an ARMv7 core even at the same clock speed. This is a point of CPU architecture that can be a little bit hard to understand.

    First off, the registers: A computer has different levels of storage. Usually we think of the RAM as the “fast” storage and the disk (or SD card, etc.) as the “slow”, persistent storage. We load things from disk into RAM and often keep them there for a while, because it would slow things down if we had to keep storing to and retrieving from disk. (and when we run out of RAM, the OS starts swapping things to disk - which absolutely murders performance)

    The trick is, there are deeper layers that follow the same pattern. CPU cache is faster than the main RAM, and CPU registers are faster than the CPU cache. At this deepest level, you can think of the CPU registers as the set of data the CPU can work with very quickly, and the main RAM as the slower, longer-term data storage. (We don’t tend to think of RAM as being slow, but when you’re running some performance-intensive code, the difference in speed when you finally have to work with RAM is like hitting a brick wall) In this case, the new processor has (I think) more than twice as many registers, and they’re twice the size of those on the Pi 2. (31 general-purpose 64-bit registers vs. 13 general-purpose 32-bit registers)

    There’s also a separate set of registers used for floating-point math and SIMD operations, where a larger register is packed with smaller operands that are used together in some calculation: On the Pi 2 (Cortex-A7) there were (I believe) 16 64-bit floating point registers, and on the Pi 3 (Cortex-A53) there are 32 128-bit floating point registers.

    All this means that the amount of data the CPU can work with before it has to go fetch something from RAM is increased. But there’s a catch: Your programs have to be compiled in a way that takes advantage of the extra registers. Code compiled to use the full set of ARMv8 registers won’t run on ARMv7. It’ll just crash. And code that’s written to use just the ARMv7 (or v6) registers won’t be able to take advantage of the register set on the A53. We went through this with the Pi 2 as well, actually: Raspbian (and most Pi-specific distros, I think) was compiled for the original Pi (ARMv6), and due to the bootloader trickery that’s required on the Pi, it’s not always easy to substitute another distribution. It’s kind of unfortunate, but taking advantage of the Pi 3’s CPU enhancements isn’t going to be easy.

    The Pi 3 will still be faster than the Pi 2, but we won’t really tap its full potential unless the software we run (including the OS, probably) is compiled specifically for ARMv8.
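
    Just to illustrate the “compiled for the right architecture” point, here’s a little program using the compilers’ standard predefined macros (generic GCC/Clang stuff, nothing Pi-specific):

        // Report what the compiler targeted when this program was built.
        // __aarch64__ and __arm__ are standard GCC/Clang predefined macros.
        #include <cstdio>

        int main() {
        #if defined(__aarch64__)
            std::puts("Built for 64-bit ARM (AArch64 / ARMv8-A)");
        #elif defined(__arm__)
            std::puts("Built for 32-bit ARM (AArch32 / ARMv7 or earlier)");
        #else
            std::puts("Built for some other architecture");
        #endif
            std::printf("Pointer size: %zu bits\n", sizeof(void *) * 8);
            return 0;
        }

    Built with Raspbian’s stock toolchain, this should report 32-bit ARM even when you run it on a Pi 3 - which is exactly the problem.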

  • 64-bit pointers might not be too useful with 1GB of RAM, ‘cause even with 32-bit pointers that still leaves 3GB of virtual address space for the OS, memory mapping, swap, etc. It’s possible to have virtual address space that is greater than all your actual RAM but still run into problems, because parts of that virtual address space are set aside for certain purposes. Hence on a 32-bit machine with 4GB of RAM, a single process might only be able to address 2GB of that, because the other 2GB are set aside as the OS’s address space. Of course, other advantages of a 64-bit CPU apply even if addressing isn’t an issue. It’ll do 64-bit math faster, because the operands will each fit in a register and the operation can be expressed as a single instruction. And a huge advantage of x86-64 over regular x86, for example, is that the CPU registers aren’t just bigger - there are actually more of them. This means compilers can optimize the code much more effectively, reducing the need to repeatedly fetch values from RAM, which is crazy-slow. (I don’t know if 64-bit ARM offers similar advantages.)
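
    To make the 64-bit math point concrete (a generic C++ illustration, nothing board-specific):

        #include <cstdint>
        #include <cstdio>

        int main() {
            // Each operand fits in a single register on a 64-bit CPU, so the
            // multiply below is one instruction; a 32-bit CPU has to split each
            // operand across two registers and combine several 32-bit multiplies
            // and adds to produce the same 64-bit result.
            uint64_t a = 0x123456789ABCDEF0ULL;
            uint64_t b = 1000003ULL;
            uint64_t product = a * b;   // wraps modulo 2^64 - same result either way
            std::printf("product = 0x%016llx\n", (unsigned long long)product);
            return 0;
        }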

  • Hrm, I was hoping this would be a step up from the Teensy 3.2, but it seems to fall a bit short instead - mainly in terms of RAM (half) and CPU speed (2/3).

No public wish lists :(