Retrotechtacular: Electronic Publishing in the 1930s | Hackaday

We are living in the age of citizen journalism and the 24-hour news cycle. Reports about almost anything newsworthy can be had from many perspectives, both vetted and amateur.

Just a few decades ago, people relied on daily newspapers, radio, and word of mouth for their news. On the brink of the television age, several radio stations in the United States participated in an experiment to broadcast news over radio waves. But this was no ordinary transmission. At the other end, a new type of receiver printed out news stories, line drawings, and pictures on a long roll of paper.

Radio facsimile newspaper technology was introduced to the public at the 1939 World’s Fair at two different booths. One belonged to an inventor named William Finch, and one to RCA. Finch had recently made a name for himself with his talking newspaper, which embedded audio into a standard newspaper in the form of wavy lines along the edges that were read by a special device.

Machines, Lost In Translation: The Dream Of Universal Understanding : All Tech Considered : NPR

It was early 1954 when computer scientists, for the first time, publicly revealed a machine that could translate between human languages. It became known as the Georgetown-IBM experiment: an “electronic brain” that translated sentences from Russian into English.

The scientists believed a universal translator, once developed, would not only give Americans a security edge over the Soviets but also promote world peace by eliminating language barriers.

They also believed this kind of progress was just around the corner: Leon Dostert, the Georgetown language scholar who initiated the collaboration with longtime IBM chief Thomas Watson, suggested that people might be able to use electronic translators to bridge several languages within five years, or even less.

The process proved far slower. (So slow, in fact, that about a decade later, funders of the research launched an investigation into its lack of progress.) And more than 60 years later, a true real-time universal translator — a la C-3PO from Star Wars or the Babel Fish from The Hitchhiker’s Guide to the Galaxy — is still the stuff of science fiction.

Stimulating Machines’ Brains

After decades of jumping linguistic and technological hurdles, the technical approach scientists use today is known as the neural network method, in which machines are trained to emulate the way people think — in essence, creating an artificial version of the neural networks of our brains.

Neurons are nerve cells that are activated by all aspects of a person’s environment, including words. The longer someone exists in an environment, the more elaborate that person’s neural network becomes.

With the neural network method, the machine converts every word into its simplest representation — a vector, a list of numbers that plays a role loosely analogous to a neuron's activation, encoding information not only about the word itself but about the surrounding sentence or text. As with machine learning generally, the network produces more accurate results the more translations it attempts, with limited assistance from a human.
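The idea that words become vectors, and that similar words end up with similar vectors, can be sketched in a few lines. This is a toy illustration with made-up four-dimensional vectors; real translation systems learn vectors with hundreds of dimensions from large text corpora, and the specific numbers here are assumptions for demonstration only.

```python
import math

# Toy 4-dimensional word vectors (hypothetical values for illustration;
# real systems learn much larger vectors from millions of sentences).
vectors = {
    "king":  [0.9, 0.8, 0.1, 0.3],
    "queen": [0.9, 0.7, 0.2, 0.9],
    "apple": [0.1, 0.2, 0.9, 0.4],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: near 1.0 means closely related."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Related words sit closer together in vector space than unrelated ones.
print(cosine_similarity(vectors["king"], vectors["queen"]))  # higher
print(cosine_similarity(vectors["king"], vectors["apple"]))  # lower
```

The same geometry is what lets a translation network line up a Russian word's vector with its nearest English counterpart.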

Though machines can now “learn” similarly to the way humans learn, they still face some limits, says Yoshua Bengio, a computer science professor at the University of Montreal who studies neural networks. One of the limits is the sheer amount of data required — children need far less to learn a language than machines do.


Brain–computer interface – Wikipedia, the free encyclopedia

A brain–computer interface (BCI), sometimes called a mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between the brain and an external device. BCIs are often directed at assisting, augmenting, or repairing human cognitive or sensory-motor functions.

Research on BCIs began in the 1970s at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA.[1][2] The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature.

Notes on — 512 Pixels

I’ve had an on-again, off-again thing with Evernote for years. I like having attachments associated with my notes, but dislike almost everything about the service itself.

That said, there’s a lot in Evernote that I don’t use. I don’t have IFTTT routing any content in, and I don’t ever forward emails to the system. I occasionally use the web clipper to save webpages to Evernote, but it’s nowhere near vital to my workflow.

The nerd in me really likes having my notes saved as text documents, written in Markdown. I’ve used Brett Terpstra’s excellent nvALT for years, too. My biggest problem is that I can’t ever seem to find a Dropbox-powered notes app on iOS that I like. Additionally, going text-only means I need to store assorted attachments elsewhere.

I’ve lived with this tension for years, migrating content back and forth between the two systems several times.

(I’ve also spent a lot of time in Simplenote, which I’ve liked for years. It’s fast, lightweight and reliable, but the lack of attachments means it has the same core problem as plain text.)

When Apple showed off iOS 9 and OS X El Capitan, the built-in Notes app got a lot of attention. Gone was the old, let’s-sync-via-IMAP-and-hope-for-the-best system. In its place, a more modern backend — powered by CloudKit — to an app with a lot more features than before.

The new Notes app allows users to style their text easily, add checklists, photos and even hand-drawn sketches. But is it any good?

In a word, yes.

There is a better way to read on the internet, and I have found it – Vox

Aesthetically, I prefer print to most digital text. The Kindle’s screen is crap at displaying photographs or charts, and while its e-ink text is easier on the eyes than an iPad, it’s harder on the eyes than a book. The gap only grows when it comes to reading most articles online: Magazines are still laid out with a care and thoughtfulness that even the best digital publishers can’t touch (except Vox, of course).

And yet I do virtually all my reading digitally, and for a simple reason: My memory is terrible. I forget 90 percent of what I read about 90 minutes after I read it. The Kindle’s highlights and notes are invaluable to me: I can find any passage that caught my eye, or any thought I cared enough to write down, anywhere that I happen to have an internet connection. Similarly, I use the online storage system Evernote to save passages or full articles I happen across online and may want to refer back to.

These storage solutions make everything I read more useful to me after I read it. My library goes from being inaccessible to being a sprawling digital memory. But both storage solutions are, to be honest, terrible. Amazon’s Kindle site feels like it was built in 2001: It stores your highlights and notes in the least useful ways possible, its search function is garbage, its user interface seems designed to frustrate, and it is extremely, exceptionally slow. Evernote’s text clipper is better, but it doesn’t work on my phone, which is where I end up doing a lot of my reading.

But all that’s in the past. I have figured out how to read online, and it is glorious.

In this, I am indebted to Diana Kimball, who developed this system for “a decent digital commonplace book system.”

Making Kindle highlights usable with Evernote

It begins with Kindle: an export service will pull your Kindle notes and highlights out in usable, searchable form — and then plug them directly into Evernote, so they're available whenever you need them, and sortable in every way you might imagine. The difference here is profound: My Kindle highlights have gone from being available if I can remember what book they're in to discoverable if I can simply remember any word from the highlight.
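The raw material for any such export is the `My Clippings.txt` file the Kindle keeps on-device: entries separated by a `==========` line, each with a title line, a `- ...` metadata line, a blank line, and the highlighted text. Here's a minimal parsing-and-search sketch under that assumption (the exact layout can vary across Kindle firmware versions):

```python
def parse_clippings(text):
    """Parse Kindle's 'My Clippings.txt' into searchable records.

    Assumes the common layout: title line, '- ...' metadata line,
    blank line, highlight text, then a '==========' separator.
    """
    entries = []
    for block in text.split("=========="):
        lines = [l.strip("\ufeff").strip() for l in block.strip().splitlines()]
        if len(lines) < 3:
            continue
        title, meta = lines[0], lines[1]
        body = " ".join(lines[2:]).strip()
        if body:
            entries.append({"title": title, "meta": meta, "text": body})
    return entries

def search(entries, word):
    """Return every highlight containing the word -- the 'find me' search."""
    return [e for e in entries if word.lower() in e["text"].lower()]

sample = (
    "The Shallows (Nicholas Carr)\n"
    "- Your Highlight on page 7 | Added on Monday, 4 January 2016\n\n"
    "The net is designed to be an interruption system.\n"
    "==========\n"
)
entries = parse_clippings(sample)
print(search(entries, "interruption")[0]["title"])  # The Shallows (Nicholas Carr)
```

Once highlights are records like these, any notes system that indexes text — Evernote included — can make them discoverable by a single remembered word.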

In practice, this means my relationship with highlighted passages and notes has gone from one in which I have to find them to one in which they can unexpectedly, wonderfully find me. A search for, say, “filibuster” will call up highlights and notes I wasn’t specifically looking for, and that I had actually forgotten, but that help with whatever I’m working on — and that sometimes prove to be the thing I should have been looking for in the first place.

Using Instapaper premium and Evernote to save article snippets

A lot of what I read, however, isn’t books. It’s news articles, blog posts, magazine features. I’ve long wanted a cleaner way to save the best ideas, facts, and quotes I come across. Now I have one.

Instapaper — which lets you save any article you find online and read it later on any device you choose — recently added a highlight function. The free version limits the number of highlights you can have to some absurdly low number. But if you pay for the premium service — $29.99 a year — it unlocks unlimited highlights.

That’s helpful, but there’s not much you can do with the highlights on Instapaper. But Kimball created an If This Then That (IFTTT) recipe that automatically exports Instapaper highlights into Evernote. So now anything I highlight in an article — at least an article on Instapaper — is saved into the same searchable, sortable space that my book highlights inhabit. So basically anything I read in any digital format can be highlighted, and those highlights can be saved and searched. It’s wonderful.
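For anyone who'd rather not route through IFTTT, the same move — collect scattered highlights into one note per article — can be sketched from a CSV export. The column names (`Title`, `Highlight`) below are assumptions for illustration, not Instapaper's documented schema; check the header row of your actual export before relying on them:

```python
import csv
import io

def highlights_to_notes(csv_text):
    """Group exported highlights into one blockquote-style note per article.

    Column names ('Title', 'Highlight') are hypothetical placeholders;
    adapt them to whatever your export file actually uses.
    """
    grouped = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        grouped.setdefault(row["Title"], []).append(row["Highlight"])
    return {
        title: "\n".join(f"> {h}" for h in quotes)
        for title, quotes in grouped.items()
    }

sample = (
    "Title,Highlight\n"
    "Addicted to Distraction,The net is an interruption system.\n"
    "Addicted to Distraction,Nearly everyone I know is addicted.\n"
)
for title, note in highlights_to_notes(sample).items():
    print(f"# {title}\n{note}")
```

Each resulting note is plain text, so it can be dropped into Evernote, nvALT, or anything else that searches text.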

IBM Design Language | Animation: Fundamentals

Learn how IBM products move with the accuracy and precision of a machine.

For over one hundred years, IBM has crafted business machines for professionals around the world. From the powerful strike of a printing arm to the smooth slide of a typewriter carriage, each movement was fit for purpose and designed with intent. Our software demands the same attention to detail for making products feel lively and realistic.

We take inspiration from our heritage to define our animation style. Machines have solid planes, rigid surfaces and sharp, exact movements that are acted upon by physical forces. They don’t go from full-stop to top speed instantly or come to an abrupt stop, but instead take time to accelerate and decelerate. They have an inherent mass and move at different speeds in order to accomplish the tasks they were designed for.
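That accelerate-then-decelerate behavior is what animators implement with easing curves. A minimal sketch in Python, using the standard cubic ease-in-out formula (a generic easing function, not IBM's actual motion curve):

```python
def ease_in_out_cubic(t):
    """Position at normalized time t in [0, 1]: slow start, fast middle, slow end."""
    if t < 0.5:
        return 4 * t ** 3
    return 1 - ((-2 * t + 2) ** 3) / 2

# Sample the curve: the spacing between consecutive positions (the velocity)
# rises toward the middle and falls again at the end, like a machine that
# takes time to accelerate and decelerate instead of jumping to top speed.
positions = [round(ease_in_out_cubic(i / 10), 3) for i in range(11)]
print(positions)
```

An object animated along these positions never moves at constant speed, which is exactly the "inherent mass" quality the guideline describes.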

Addicted to Distraction – The New York Times

“The net is designed to be an interruption system, a machine geared to dividing attention,” Nicholas Carr explains in his book “The Shallows: What the Internet Is Doing to Our Brains.” “We willingly accept the loss of concentration and focus, the division of our attention and the fragmentation of our thoughts, in return for the wealth of compelling or at least diverting information we receive.”

Addiction is the relentless pull to a substance or an activity that becomes so compulsive it ultimately interferes with everyday life. By that definition, nearly everyone I know is addicted in some measure to the Internet. It has arguably replaced work itself as our most socially sanctioned addiction.

Endless access to new information also easily overloads our working memory. When we reach cognitive overload, our ability to transfer learning to long-term memory significantly deteriorates. It’s as if our brain has become a full cup of water and anything more poured into it starts to spill out.

I’ve known all of this for a long time. I started writing about it 20 years ago. I teach it to clients every day. I just never really believed it could become so true of me.