It’s Friday so relax and watch a hard drive defrag forever on Twitch

It’s been a while since I defragged — years, probably, because these days, for a number of reasons, computers don’t really need to. But perhaps it is we who need to defrag. And what better way to defrag your brain after a long week than by watching the strangely satisfying defragmentation process take place on a simulated DOS machine, complete with fan and HDD noise?

That’s what you can do with this Twitch stream, which has defrag.exe running 24/7 for your enjoyment.

I didn’t realize how much I missed the sights and sounds of this particular process. I’ve always found ASCII visuals soothing, and there was something satisfying about watching all those little blocks get moved around to form a uniform whole. What were they doing down there on the lower right-hand side of the hard drive, anyway? That’s what I’d like to know.

Afterwards I’d launch a state-of-the-art game like Quake 2 just to convince myself it was loading faster.

There’s also that nice purring noise that a hard drive would make (and which is recreated here). At least, I thought of it as purring. For the drive, it’s probably like being waterboarded. But I did always enjoy having the program running while keeping everything else quiet, perhaps as I was going to bed, so I could listen to its little clicks and whirrs. Sometimes it would hit a particularly snarled sector and really go to town, grinding like crazy. That’s how you knew it was working.


The whole thing is simulated, of course. There isn’t really just an endless pile of hard drives waiting to be defragged on decades-old hardware for our enjoyment (except in my box of old computer things). But the simulation is wonderfully complete, although if you think about it you probably never used DOS on a 16:9 monitor, and probably not at 1080p. It’s okay. We can sacrifice authenticity so we don’t have to windowbox it.
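For what it’s worth, the satisfying part of the illusion is cheap to reproduce. Here’s a toy defrag pass in C that scatters blocks and marches them into contiguous order, one swap per frame; it’s purely my own sketch, not whatever the stream actually runs.

```c
/* A toy defrag pass: scatter used blocks ('#') among free space ('.')
   and move them into contiguous order one block at a time. Purely an
   illustration -- not what the stream actually runs. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define DISK 64

int main(void) {
    char disk[DISK + 1];
    memset(disk, '.', DISK);
    disk[DISK] = '\0';

    /* Fragment the "drive": place 24 used blocks at random. */
    srand(42);
    for (int placed = 0; placed < 24; ) {
        int i = rand() % DISK;
        if (disk[i] == '.') { disk[i] = '#'; placed++; }
    }
    printf("%s\n", disk);

    /* Each pass moves the last used block into the first hole. */
    for (;;) {
        char *hole = strchr(disk, '.');
        char *last = strrchr(disk, '#');
        if (!hole || !last || hole > last) break;  /* contiguous: done */
        *hole = '#';
        *last = '.';
        printf("%s\n", disk);  /* the strangely satisfying part */
    }
    return 0;
}
```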

The defragging will never stop at TwitchDefrags, and that’s comforting to me. It means I don’t have to build a 98SE rig and spend forever copying things around so I have a nicely fragmented volume. Honestly they should include this sound on those little white noise machines. For me this is definitely better than whale noises.

The Automatica automates pour-over coffee in a charming and totally unnecessary way

Most mornings, after sifting through the night’s mail haul and skimming the headlines, I make myself a cup of coffee. I use a simple pour-over cone and paper filters, and (in what is perhaps my most tedious Seattleite affectation), I grind the beans by hand. I like the manual aspect of it all. Which is why this robotic pour-over machine is to me so perverse… and so tempting.

Called the Automatica, this gadget, currently raising funds on Kickstarter but seemingly complete as far as development and testing go, is basically a way to do pour-over coffee without holding the kettle yourself.

You fill the kettle and place your mug and cone on the stand in front of it. The water is brought to a boil and the kettle tips automatically. Then the whole mug-and-cone portion spins slowly, distributing the water evenly around the grounds, and stops once 11 ounces have been dispensed over the correct duration. You can use whatever cone and mug you want, as long as they’re about the right size.
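Mechanically, that sequence maps onto a simple state machine: heat until boiling, tip, rotate while the water flows, right the kettle at 11 ounces. Here’s a minimal sketch of such a loop in C; to be clear, every name and number below is my own illustrative assumption, not the Automatica’s actual firmware.

```c
/* Illustrative brew-cycle state machine. All values are assumptions;
   this is not the Automatica's real control code. */
#include <stdio.h>

typedef enum { HEATING, POURING, DONE } BrewState;

int main(void) {
    const double target_oz = 11.0;   /* per the campaign's description */
    double temp_c = 20.0, poured_oz = 0.0;
    BrewState state = HEATING;

    while (state != DONE) {
        switch (state) {
        case HEATING:
            temp_c += 5.0;           /* stand-in for the heating element */
            if (temp_c >= 100.0) {
                printf("Boiling; tipping kettle.\n");
                state = POURING;
            }
            break;
        case POURING:                /* platform rotates as water flows */
            poured_oz += 0.5;
            if (poured_oz >= target_oz) {
                printf("%.0f oz dispensed; righting kettle.\n", poured_oz);
                state = DONE;
            }
            break;
        default:
            break;
        }
    }
    return 0;
}
```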

Of course, the whole point of pour-over coffee is that it’s simple: you can do it at home, while on vacation, while hiking, or indeed at a coffee shop with a bare minimum of apparatus. All you need is the coffee beans, the cone, a paper filter — although some cones omit even that — and of course a receptacle for the product. (It’s not the simplest — that’d be Turkish, but that’s coffee for werewolves.)

Why should anyone want to disturb this simplicity? Well, the same reason we have the other 20 methods for making coffee: convenience. And in truth, pour-over is already automated in the form of drip machines. So the obvious next question is, why this dog and pony show of an open-air coffee bot?

Aesthetics! Nothing wrong with that. What goes on in the obscure darkness of a drip machine? No one knows. But this? This you can watch, audit, understand. Even if the machinery is complex, the result is simple: hot water swirls gently through the grounds. And although it’s fundamentally a bit absurd, it is a good-looking machine, with wood and brass accents and a tasteful kettle shape. (I do love a tasteful kettle.)

The creators say the machine is built to last “generations,” a promise which must of course be taken with a grain of salt. Anything with electronics has the potential to short out, to develop a bug, to be troubled by humidity or water leaks. The heating element may fail. The motor might stutter or a hinge catch.

But all that is true of most coffee machines, and unlike those, this one appears to be made with care and from high-quality materials. The cracking and warping you can expect in thin molded plastic won’t happen to this thing, and if you take care of it, it should last at least several years.

And it had better, given the minimum pledge price that gets you a machine: $450. That’s quite a chunk of change. But like audiophiles, coffee people are kind of suckers for a nice piece of equipment.

There is, of course, the standard crowdfunding caveat emptor: this isn’t a pre-order but a pledge to back an interesting hardware startup, and if it’s anything like the last five or six campaigns I’ve backed, it’ll arrive late after facing unforeseen difficulties with machining, molds, leaks and so on.

Movado Group acquires watch startup MVMT

The Movado Group, which sells watches under multiple brands, including Lacoste, Tommy Hilfiger and Hugo Boss, has purchased MVMT, a small watch company founded by Jacob Kassan and Kramer LaPlante in 2013. The company, which advertised heavily on Facebook, logged $71 million in revenue in 2017. Movado paid $100 million for it.

“The acquisition of MVMT will provide us greater access to millennials and advances our Digital Center of Excellence initiative with the addition of a powerful brand managed by a successful team of highly creative, passionate and talented individuals,” Movado Chief Executive Efraim Grinberg said.

MVMT makes simple watches for the millennial market in the vein of Fossil or Daniel Wellington. However, the company carved out a niche by advertising heavily on social media and being one of the first microbrands with a solid online presence.

“It provides an opportunity to Movado Group’s portfolio as MVMT continues to cross-sell products within its existing portfolio, expand product offerings within its core categories of watches, sunglasses and accessories, and grow its presence in new markets through its direct-to-consumer and wholesale business,” said Grinberg.

MVMT is best known as a “fashion brand,” that is, one that sells cheaper quartz watches on style rather than complexity or cost. Its pieces include standard three-handed models and newer quartz chronographs.

Tomu is a fingernail-sized computer that is easy to swallow

I’m a huge fan of single-board computers, especially if they’re small enough to swallow. That’s why I like the Tomu. This teeny-tiny ARM board plugs into your computer’s USB port and carries two LEDs and two buttons. Once it’s plugged in, the little computer can simulate a hard drive or mouse, send MIDI data, and even blink quickly.

The Tomu is built around the Silicon Labs Happy Gecko EFM32HG309 and can also act as a Universal 2nd Factor (U2F) security token. It is completely open source, and all the code is on the project’s GitHub.

I bought one for $30 and messed with it for a few hours. The programs are very simple, and you can load in various tools, including a clever little mouse mover (perhaps to simulate mouse usage for an app) and a little app that blinks the lights quickly. Otherwise you can use it to turn your USB hub into an on-off switch for your computer. It’s definitely not a full-fledged computer (there are limited I/O options, obviously), but it’s a cute little tool for those who want to do a little open-source computing.
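For a sense of what those simple programs look like, here’s roughly the shape of a bare-metal LED blinker for Silicon Labs’ EFM32 parts, using the vendor’s emlib library. The pin assignment and the crude delay are my guesses for illustration; the real Tomu examples live in the project’s GitHub.

```c
/* Sketch of an EFM32 LED blinker using Silicon Labs' emlib. The pin
   choice is an assumption -- check the Tomu schematic and repo for the
   real thing. */
#include "em_chip.h"
#include "em_cmu.h"
#include "em_gpio.h"

int main(void) {
    CHIP_Init();                            /* apply chip errata fixes */
    CMU_ClockEnable(cmuClock_GPIO, true);   /* clock the GPIO block */

    /* Assumed LED pin: port A, pin 0, push-pull output, initially low. */
    GPIO_PinModeSet(gpioPortA, 0, gpioModePushPull, 0);

    while (1) {
        GPIO_PinOutToggle(gpioPortA, 0);
        for (volatile int i = 0; i < 100000; i++)
            ;                               /* crude busy-wait delay */
    }
}
```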

One problem? It’s really, really small. I’d do more work on mine but I already lost it while I was clearing off a desk so I could see it better. So it goes.

VR optics could help old folks keep the world in focus

The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smartglasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.

I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others on his team were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.

There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. And there are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?

That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.

Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way that we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only 10 feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.


It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table three feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.

Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.

In the case above, if the user is looking at the table or the rest of the room, the glasses assume whatever normal correction the person requires to see — perhaps none. But if they shift their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.
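Boiled down, the decision step is just: look up the distance at the gaze point and compare it with the wearer’s near limit, adding lens power only when the object falls inside it. A toy version in C, with the caveat that the threshold, the diopter arithmetic and all names are my assumptions rather than anything from the Stanford prototype:

```c
/* Toy version of the lens-adjustment decision. The threshold and math
   are illustrative assumptions, not the researchers' code. */
#include <stdio.h>

#define NEAR_LIMIT_IN 20.0   /* wearer can't focus closer than this */
#define IN_PER_M      39.37  /* inches per meter */

/* Extra lens power in diopters for an object dist_in inches away
   (the distance read from the depth map at the gaze point). */
double lens_adjustment(double dist_in) {
    if (dist_in >= NEAR_LIMIT_IN)
        return 0.0;          /* far enough: normal correction only */
    /* Diopters are 1 / distance-in-meters; add just enough power to
       cover the gap between the object and the wearer's near limit. */
    return IN_PER_M / dist_in - IN_PER_M / NEAR_LIMIT_IN;
}

int main(void) {
    printf("newspaper at 14 in: +%.2f D\n", lens_adjustment(14.0)); /* ~0.84 */
    printf("table at 36 in:     +%.2f D\n", lens_adjustment(36.0)); /* 0.00 */
    return 0;
}
```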

The whole process of checking the gaze, looking up the depth of the selected object and adjusting the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice the lag, but redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the device’s changes will be complete by the time the user’s eyes would normally be at rest again.

“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”

The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and to check for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method, and that despite its early stage it’s highly promising. We can expect to hear more when the full paper is published.

XYZprinting announces the da Vinci Color Mini

XYZprinting may have finally cracked the color 3D-printing code. Its latest machine, the $1,599 da Vinci Color Mini, is a full-color printer that uses three CMY ink cartridges to stain the filament as it is extruded, allowing for up to 15 million color combinations.

The printer is currently available for pre-order on Indiegogo for $999.

The printer can build objects up to 5.1″ × 5.1″ × 5.1″ in size, and it can print PLA or PETG. A small ink cartridge stains the 3D Color-inkjet PLA as it comes out, creating truly colorful objects.

“Desktop full-color 3D printing is here. Now, consumers can purchase an easy-to-operate, affordable, compact full-color 3D printer for $30,000 less than market rate. This is revolutionary because we are giving the public access to technology that was once only available to industry professionals,” said Simon Shen, CEO of XYZprinting.

The new system is aimed at the educational and home markets and, at less than $1,000 on pre-order, it hits a unique and important sweet spot in terms of price. While the prints aren’t perfect, being able to print in full color for the price of a nicer single-color 3D printer is pretty impressive.

Fitbit’s upcoming Charge 3 to sport full touchscreen, per leak

This appears to be the Fitbit Charge 3 and, if it is, several big changes are in the works for Fitbit’s premier fitness tracker band.

The leak comes from Android Authority, which points to several changes. First, the device has a full touchscreen rather than the clunky quasi-touchscreen of the Charge 2. From the touchscreen, users can navigate the device and even reply to notifications and messages. Second, the Charge 3 will be swim-proof to 50 meters. Finally, and this is a bad one, the Charge 3 will not have built-in GPS, meaning users will have to bring a smartphone along on a run if they want GPS data.

Price and availability were not revealed, but chances are the device will hit stores in the coming weeks, ahead of the holidays.

This is a big change for Fitbit. If the leak is correct on all points, Fitbit is pushing the Charge 3 into smartwatch territory. Dropping GPS is regrettable, but the company probably has data showing that only a minority of wearers use the feature. With a full touchscreen and a notification-reply function, the Charge 3 is gaining a lot of functionality for its size.

This robot maintains tender, unnerving eye contact

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
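That occasional freak-out is the classic failure mode of driving motors straight from raw face-tracking data, and the usual fix is to smooth the signal first. Here’s a generic exponential-smoothing sketch in C; the filter and the names are mine, not Todo’s implementation.

```c
/* Generic exponential smoothing for noisy face-tracking data -- an
   illustration of the technique, not SEER's actual code. */
#include <stdio.h>

typedef struct {
    double brow, eyelid, head_yaw, head_pitch;  /* mirrored features */
} Pose;

/* Blend each new reading into the running state. Lower alpha is
   steadier but laggier; alpha near 1 passes the jitter straight
   through to the motors. */
void smooth_pose(Pose *state, const Pose *raw, double alpha) {
    state->brow       += alpha * (raw->brow       - state->brow);
    state->eyelid     += alpha * (raw->eyelid     - state->eyelid);
    state->head_yaw   += alpha * (raw->head_yaw   - state->head_yaw);
    state->head_pitch += alpha * (raw->head_pitch - state->head_pitch);
}

int main(void) {
    Pose state = {0}, raw = {1.0, 0.5, 10.0, -5.0};
    for (int frame = 1; frame <= 3; frame++) {
        smooth_pose(&state, &raw, 0.3);
        printf("head yaw after frame %d: %.2f\n", frame, state.head_yaw);
    }
    return 0;
}
```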

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.