
In 1989, General Magic Saw the Future of Smartphones


Sometimes a design is so completely evocative of its time that seeing it brings long-forgotten memories flooding back. The interface of the Motorola Envoy does that for me, even though I never owned one, or indeed any personal digital assistant. There's just something about the Envoy's bitmapped grayscale icons that screams 1990s, a time when we were on the cusp of the Internet boom but didn't yet know what that meant.


The Motorola Envoy was an apotheosis of skeuomorphic design

Open the Envoy, and the home screen presents a tableau of a typical office circa 1994. On your grayscale desk sit a telephone (a landline, of course), a Rolodex, a notepad, and a calendar. Behind the desk are a wall clock, in- and out-boxes, and a filing cabinet. It's a triumph of skeuomorphic design.

Skeuomorphism is a term used by interface designers to describe GUI objects that mimic their real-world counterparts: Click the telephone to place a call, click the calendar to make an appointment. In 1994, when the Envoy debuted, the design was so intuitive that many users did not need to consult the manual to start using their new device.

About the size of a book and weighing in at 0.77 kilograms (1.7 pounds), the Envoy was a little too big to fit in your pocket. It had a 7.6-by-11.4-centimeter LCD display, which reviewers at the time noted was not backlit. The device came with 1 megabyte of RAM, 4 MB of ROM, a built-in 4,800-bit-per-second radio modem, a fax and data modem, and an infrared transceiver.

The Envoy was among the first handheld computers designed to run the Magic Cap (short for Communicating Applications Platform) operating system. It used the metaphor of a room to organize applications and help users navigate through the various options. For most business users, the Office, with its default desk, was the main interface. The user could also navigate to the virtual Hallway, complete with wall art and furniture, and then enter other rooms, including the Game Room, Living Room, Storeroom, and Control Room. Each room contained its own applications.

A grayscale graphical user interface shows a desktop with icons for a phone, Rolodex, calendar, and other common office implements.
The Motorola Envoy's interface was based on skeuomorphic design, in which onscreen objects resemble their real-world counterparts and suggest their uses. Cooper Hewitt, Smithsonian Design Museum

A control bar across the bottom of the screen aided navigation. The desk button, the equivalent of a home link, returned the user to the Office. The stamp offered decorative elements, including smileys, which were then a novel concept. The magic lamp gave access to search, print, fax, and mail commands. An icon that looks like a purse, but was called a tote bag, served as a holding area for copied text that could then be passed to other applications, similar to your computer's clipboard. The tool caddy called up drawing and editing options. The keyboard button brought up an onscreen keyboard, an innovation widely copied by later PDAs and smartphones.

Skeuomorphic design began to wane in the mid-2000s, as Microsoft, Google, and Apple embraced flat design. A minimalist reaction to skeuomorphism, flat design emphasized two-dimensional elements and bright colors. Gone were gratuitous animation and 3D effects. Apple's trash can and Windows' recycle bin are two skeuomorphic icons that survived. (The Envoy had a trash truck on its toolbar for that function.)

Part of the shift away from skeuomorphism was simply practical: As devices added more applications and features, designers needed a cleaner display to organize information. And the rapid evolution of both physical and digital technologies quickly led to outdated icons. Does anybody still use a Rolodex to store contact information or a floppy disk to save data? As their real-world counterparts became obsolete, the skeuomorphic equivalents looked quaint.

The Envoy's interface is one of the reasons the object pictured at top found its way into the collections of the Cooper Hewitt, Smithsonian Design Museum, in New York City. Preserving and displaying the Envoy's functionality a quarter century after its heyday posed a unique challenge. Ben Fino-Radin, founder and lead conservator at Small Data Industries, worked on the digital preservation of the Envoy and wrote an instructive blog post about it. Museums have centuries' worth of experience conserving physical objects, but capturing the particular 1994 feel of a software design required new technical expertise. Small Data Industries ended up buying a second Envoy on eBay in order to take it apart, examine the internal components, and reverse engineer how it worked.

How General Magic both failed and succeeded

Although the Envoy's user interface is what caught my interest and made me pick it for this month's column, that is not why the Envoy is beloved of computer historians and retro-tech enthusiasts. Rather, it is the company behind the Envoy, General Magic, that continues to fascinate.

General Magic is considered a classic example of a Silicon Valley noble failure. That is, if you count the forerunner to the smartphone and a design team whose members later brought us the iPod, iPhone, Android, eBay, Dreamweaver, the Apple Watch, and Nest as failures.

The story of General Magic begins at Apple in 1989, when Bill Atkinson, Andy Hertzfeld, and Marc Porat, all veterans of the Macintosh development team, began working on the Paradigm project. They tried to convince Apple CEO John Sculley that the next big thing was a marriage of communications and consumer electronics embodied in a handheld device. After about nine months, the team was not finding the support it wanted within Apple, and Porat persuaded Sculley to spin it off as an independent company, with Apple retaining a 10 percent stake.

In 1990, General Magic began operations with an ambitious mission statement:

We have a dream of improving the lives of many millions of people by means of small, intimate life support systems that people carry with them everywhere. These systems will help people to organize their lives, to communicate with other people, and to access information of all kinds. They will be simple to use, and come in a wide range of models to fit every budget, need, and taste. They will change the way people live and communicate.

Pretty heady stuff.

General Magic quickly became the hottest secret in Silicon Valley. The company prized secrecy and nondisclosure agreements to keep its plans from leaking, but as prominent engineers joined the team, anticipation of success kept building. General Magic inked partnerships with Sony, Motorola, AT&T, Matsushita, and Philips, each bringing a particular expertise to the table.

At its heart, General Magic was trying to transform personal communications. A competitor to the Motorola Envoy that also used Magic Cap, Sony's Magic Link, had a phone jack and could connect to the AT&T PersonaLink Services network via a dial-up modem; it also had built-in access to the America Online network. The Envoy, on the other hand, had an antenna to connect to the ARDIS (Advanced Radio Data Information Service) network, the first wireless data network in the United States. Developed in 1983 by Motorola and IBM, ARDIS had spotty data coverage, its speeds were slow (no more than 19.2 kilobits per second), and costs were high. The Envoy initially sold for US $1,500, but monthly data fees could run $400 or more. Neither the Magic Link nor the Envoy was a commercial success.

Rabbits roam free to help spark creativity, personal hygiene seems optional, and pulling all-nighters is the norm.

Maybe it was hubris before the fall, or maybe the General Magic team truly believed they were embarking on something historic, but the company allowed documentary filmmaker David Hoffman to tape meetings and interview its employees. Filmmakers Sarah Kerruish, Matt Maude, and Michael Stern took this archival bonanza and turned it into the award-winning 2018 documentary General Magic.

The original footage perfectly captures the energy and drive of a 1990s startup. Rabbits roam the office to help spark creativity, personal hygiene seems optional, and pulling all-nighters is the norm. Young engineers build their own versions of the USB and touch screens in order to realize their dreams.

The film also shows a company so caught up in a vision of the future that it fails to see the world changing around it, specifically the rise of the Internet. As General Magic begins to miss deadlines and its products fail to live up to their hype, the company falters and goes into bankruptcy.

But the story doesn't end there. The cast of characters moves on to other projects that prove far more remarkable than Magic Cap and the Envoy. Tony Fadell, who had joined General Magic right out of college, goes on to develop the iPod, coinvent the iPhone, and found Nest (now Google Nest). Kevin Lynch, a star Mac software engineer when he joined General Magic, leads the team that develops Dreamweaver (now an Adobe product) and serves as lead engineer on the Apple Watch. Megan Smith, a product design lead at General Magic, later becomes chief technology officer in the Obama administration.

Marc Porat had challenged his team to create a product that, once you use it, you won't be able to live without. General Magic fell short of that mark, but it groomed a crew of engineers and designers who went on to deliver those can't-live-without-it devices.

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the January 2022 print issue as "Ode to the Envoy."


Did Apple Really Embrace Right-to-Repair?


IEEE Spectrum recently spoke with Kyle Wiens, cofounder of iFixit, which supplies repair parts and guidance for Apple devices, among others, about Apple's announcement last month that it would give customers options to repair their devices themselves.

Spectrum: Take a minute to describe the situation before a couple of weeks ago, when Apple announced a change of policy on self-repair. Suppose I bought a new Apple phone and I sat on it and cracked the screen and decided I wanted to repair it. What could I do?

Wiens: Apple basically provided no option for that. They have gone out of their way to prevent people from doing that kind of repair. So your option is to go to a third party, a company like iFixit. We have been operating in spite of Apple. Apple's known for going after independent parts companies for trademark violations and that sort of thing. Apple did not make any service information available. They designed the product to be glued together, so it's hard to service. They don't sell parts. That's been the state of things.

But the only option wasn't just a factory repair, to take it back to the Apple Store. At least they seem to be saying that they offered independent repair shops. Was that not true before?

Yes, about a year ago Apple launched a program that they call IRP, for Independent Repair Provider program. It is a way for local shops to get access to Apple parts and tools. But there are many catches. The biggest catch is that the contract you have to sign requires you to turn your customer data over to Apple. Most shops that we know of have not been willing to sign that: If you do, you sign away your soul.

Then on November 17th, Apple makes an announcement. What did they say? And did this come as a surprise?

It was definitely a surprise. They said that they are going to start making information and parts available directly to consumers to be able to fix their own device, starting with the iPhone 12 and 13 and then possibly expanding to other devices in the future. It hasn't launched yet. They said it's going to start early next year. This is a big change: Apple has never published the service manual for an iPhone before.

Black and white photo of a man in glasses and a cowboy hat holding a giant wrench.
Kyle Wiens, cofounder of iFixit and a champion of the right to repair. iFixit

Why now? Is it because of the kind of lobbying you've been doing?

It's clear that this is in response to pressure from legislators and the Federal Trade Commission, which has been investigating this. So there was pressure coming from all sides. They are trying to sort of get ahead of it.

Is it your sense that they're really trying to get repair parts into people's hands at reasonable prices, that this represents a change in their philosophy? Or do you think they intend simply to make repair parts available on paper so that they satisfy any future regulations?

I think it's going to be a little of both. But we'll have to wait and see. After twenty years of watching them restrict repair options every which way, I've got some skepticism. But they're going to make the service manual available publicly. That's a huge step. That's exactly the right thing to do.

There is, however, a catch with the software that they're saying they're going to provide: They're saying that you're going to have to buy the part from Apple in order to use the software to "pair" the part.

Tell me about this pairing of parts that gets done in the Apple devices.

This is a completely new concept that Apple's sort of inventing. It's another way for them to maintain control of things, and it's kind of novel. Imagine you had two coffee makers and you wanted to take the carafe from one coffee maker and use it in the other one, but you couldn't, unless you had the manufacturer's permission. Apple has been doing this with the big components that you need to fix a phone. So that's the battery, the screen, and the camera.

So I couldn't take a battery out of a phone that I sat on and put it into a working phone of the identical model that has a weak battery?

That's the idea. I can't say that's 100 percent the case. You still can do that today, but you get warnings, basically the equivalent of a check-engine light. You have to have Apple's blessing and permission to turn that off.

So this is a little like printer ink cartridges, where companies put a chip in the cartridge so that you couldn't buy an aftermarket replacement cartridge.

It's worse: It's like saying if I have two identical printers, I can't swap the cartridges between them, even if they're both genuine cartridges. You can't salvage parts in this regime. And this is what all of the recyclers do. They might use 10 broken phones to make three of them work.

Presumably Apple themselves can do this if they want.

Yes. They're just not letting anyone else do it. It's a completely arbitrary restriction.

Is this an issue that, like, the Federal Trade Commission recognizes?

It's definitely something that they're keeping an eye on. And it's something that we're focusing on in our efforts to promote right-to-repair legislation. You also have the European governments looking at this. The Australian government just yesterday released a 400-page report looking into the overall repair situation, and pairing definitely came up. So there will be pressure on multiple fronts if Apple insists that they have to approve each repair. I don't think that's going to fly with governments.

Suppose that future legislation and regulations require all electronics manufacturers to make parts and manuals available at a fair and reasonable price. There's a lot of wiggle room in what you consider "a fair and reasonable price" and what you consider "a part." Can you see manufacturers dragging their feet, making it so you still can't get repair parts reasonably?

They definitely could. And we expect Apple to price things in a manner that's not really accessible. What we've seen with their independent repair program is that they're charging the repair shops the same price for the part that you pay when you go into an Apple store and pay for the full repair.

That's why we really need an attitude shift. I mean, we're dragging these companies along kicking and screaming. What we want to see is them embrace the model and find a way to, like, make it actually work for everyone.


We Need Software Updates Forever


Fortunately for such artificial neural networks (later rechristened "deep learning" when they included extra layers of neurons), decades of Moore's Law and other improvements in computer hardware yielded a roughly 10-million-fold increase in the number of computations that a computer could do in a second. So when researchers returned to deep learning in the late 2000s, they wielded tools equal to the challenge.

These more-powerful computers made it possible to construct networks with vastly more connections and neurons and hence greater ability to model complex phenomena. Researchers used that ability to break record after record as they applied deep learning to new tasks.

While deep learning’s rise may have been meteoric, its future may be bumpy. Like Rosenblatt before them, today’s deep-learning researchers are nearing the frontier of what their tools can achieve. To understand why this will reshape machine learning, you must first understand why deep learning has been so successful and what it costs to keep it that way.

Deep learning is a modern incarnation of the long-running trend in artificial intelligence that has been moving from streamlined systems based on expert knowledge toward flexible statistical models. Early AI systems were rule based, applying logic and expert knowledge to derive results. Later systems incorporated learning to set their adjustable parameters, but these were usually few in number.

Today's neural networks also learn parameter values, but those parameters are part of such flexible computer models that, if they are big enough, they become universal function approximators, meaning they can fit any type of data. This unlimited flexibility is the reason why deep learning can be applied to so many different domains.

The flexibility of neural networks comes from taking the many inputs to the model and having the network combine them in myriad ways. This means the outputs won’t be the result of applying simple formulas but instead immensely complicated ones.

For example, when the cutting-edge image-recognition system
Noisy Student converts the pixel values of an image into probabilities for what the object in that image is, it does so using a network with 480 million parameters. The training to ascertain the values of such a large number of parameters is even more remarkable because it was done with only 1.2 million labeled images, which may understandably confuse those of us who remember from high school algebra that we are supposed to have more equations than unknowns. Breaking that rule turns out to be the key.

Deep-learning models are overparameterized, which is to say they have more parameters than there are data points available for training. Classically, this would lead to overfitting, where the model not only learns general trends but also the random vagaries of the data it was trained on. Deep learning avoids this trap by initializing the parameters randomly and then iteratively adjusting sets of them to better fit the data using a method called stochastic gradient descent. Surprisingly, this procedure has been proven to ensure that the learned model generalizes well.
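To make the last two paragraphs concrete, here is a minimal sketch (my own illustration, not from the article or its authors) of the overparameterization-plus-SGD recipe, using a linear model with ten times as many parameters as training points; the point is only to show the mechanics of random initialization followed by iterative stochastic updates.

```python
# Minimal sketch (illustrative only): an overparameterized linear model,
# 50 training points but 500 parameters, fit with stochastic gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_params = 50, 500            # far more parameters than data points

X = rng.normal(size=(n_samples, n_params))
true_w = np.zeros(n_params)
true_w[:10] = rng.normal(size=10)        # only a few directions actually matter
y = X @ true_w + 0.01 * rng.normal(size=n_samples)

w = rng.normal(scale=0.01, size=n_params)    # random initialization
lr = 0.001
for epoch in range(200):
    for i in rng.permutation(n_samples):     # stochastic: one example at a time
        grad = (X[i] @ w - y[i]) * X[i]      # gradient of the squared error
        w -= lr * grad

print("mean squared training error:", np.mean((X @ w - y) ** 2))
```

Despite having far more unknowns than equations, the stochastic updates settle on weights that fit the training data while remaining small, which is the flavor of behavior the paragraph above ascribes to deep networks; their generalization properties are, of course, the subject of much deeper theory than this toy.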

The success of flexible deep-learning models can be seen in machine translation. For decades, software has been used to translate text from one language to another. Early approaches to this problem used rules designed by grammar experts. But as more textual data became available in specific languages, statistical approaches (ones that go by such esoteric names as maximum entropy, hidden Markov models, and conditional random fields) could be applied.

Initially, the approaches that worked best for each language differed based on data availability and grammatical properties. For example, rule-based approaches to translating languages such as Urdu, Arabic, and Malay outperformed statistical ones, at first. Today, all these approaches have been outpaced by deep learning, which has proven itself superior almost everywhere it's applied.

So the good news is that deep learning provides enormous flexibility. The bad news is that this flexibility comes at an enormous computational cost. This unfortunate reality has two parts.

[Two charts: the projected error rate of deep-learning image-recognition systems (top) and the computation, in billions of floating-point operations, needed to achieve it (bottom)]
Extrapolating the gains of recent years might suggest that by 2025 the error level in the best deep-learning systems designed for recognizing objects in the ImageNet data set should be reduced to just 5 percent [top]. But the computing resources and energy required to train such a future system would be enormous, leading to the emission of as much carbon dioxide as New York City generates in one month [bottom].
SOURCE: N.C. THOMPSON, K. GREENEWALD, K. LEE, G.F. MANSO

The first part is true of all statistical models: To improve performance by a factor of k, at least k² more data points must be used to train the model. The second part of the computational cost comes explicitly from overparameterization. Once accounted for, this yields a total computational cost for improvement of at least k⁴. That little 4 in the exponent is very expensive: A 10-fold improvement, for example, would require at least a 10,000-fold increase in computation.
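One hedged way to reconstruct where that fourth power comes from (the article states the result but not the intermediate step): training compute scales roughly as the number of parameters times the number of data points, and in the overparameterized regime the parameter count grows at least in step with the data.

```latex
\text{data} \propto k^{2}, \qquad
\text{parameters} \gtrsim \text{data} \propto k^{2}, \qquad
\text{compute} \sim \text{parameters} \times \text{data} \propto k^{2} \cdot k^{2} = k^{4}
```

Plugging in k = 10 recovers the 10,000-fold figure quoted above.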

To make the flexibility-computation trade-off more vivid, consider a scenario where you are trying to predict whether a patient’s X-ray reveals cancer. Suppose further that the true answer can be found if you measure 100 details in the X-ray (often called variables or features). The challenge is that we don’t know ahead of time which variables are important, and there could be a very large pool of candidate variables to consider.

The expert-system approach to this problem would be to have people who are knowledgeable in radiology and oncology specify the variables they think are important, allowing the system to examine only those. The flexible-system approach is to test as many of the variables as possible and let the system figure out on its own which are important, requiring more data and incurring much higher computational costs in the process.

Models for which experts have established the relevant variables are able to learn quickly what values work best for those variables, doing so with limited amounts of computationwhich is why they were so popular early on. But their ability to learn stalls if an expert hasn’t correctly specified all the variables that should be included in the model. In contrast, flexible models like deep learning are less efficient, taking vastly more computation to match the performance of expert models. But, with enough computation (and data), flexible models can outperform ones for which experts have attempted to specify the relevant variables.
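Here is a small sketch of that contrast (my own illustration with made-up numbers, not the authors' X-ray example): an "expert" linear model that is told which 5 of 100 candidate features matter, versus a "flexible" model that must consider all 100, both fit to the same 30 training examples.

```python
# Illustrative comparison: expert-selected features versus all candidate features.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_features = 30, 100
expert_features = [0, 1, 2, 3, 4]                 # assume experts chose correctly

X = rng.normal(size=(n_train, n_features))
w_true = np.zeros(n_features)
w_true[expert_features] = [3.0, -2.0, 1.0, 4.0, -1.0]
y = X @ w_true + 0.1 * rng.normal(size=n_train)

X_test = rng.normal(size=(1000, n_features))      # held-out data for evaluation
y_test = X_test @ w_true

def test_error(columns):
    """Fit least squares on the chosen columns and report held-out error."""
    w, *_ = np.linalg.lstsq(X[:, columns], y, rcond=None)
    return np.mean((X_test[:, columns] @ w - y_test) ** 2)

print("expert model (5 features):    ", test_error(expert_features))
print("flexible model (100 features):", test_error(list(range(n_features))))
```

With so little data, the expert model pins down its few coefficients accurately, while the flexible model would need many more examples, and correspondingly more computation, to catch up; if the experts had picked the wrong features, though, the expert model's error would never improve no matter how much data arrived.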

Clearly, you can get improved performance from deep learning if you use more computing power to build bigger models and train them with more data. But how expensive will this computational burden become? Will costs become sufficiently high that they hinder progress?

To answer these questions in a concrete way,
we recently gathered data from more than 1,000 research papers on deep learning, spanning the areas of image classification, object detection, question answering, named-entity recognition, and machine translation. Here, we will only discuss image classification in detail, but the lessons apply broadly.

Over the years, reducing image-classification errors has come with an enormous expansion in computational burden. For example, in 2012
AlexNet, the model that first showed the power of training deep-learning systems on graphics processing units (GPUs), was trained for five to six days using two GPUs. By 2018, another model, NASNet-A, had cut the error rate of AlexNet in half, but it used more than 1,000 times as much computing to achieve this.

Our analysis of this phenomenon also allowed us to compare what’s actually happened with theoretical expectations. Theory tells us that computing needs to scale with at least the fourth power of the improvement in performance. In practice, the actual requirements have scaled with at least the
ninth power.

This ninth power means that to halve the error rate, you can expect to need more than 500 times the computational resources. That’s a devastatingly high price. There may be a silver lining here, however. The gap between what’s happened in practice and what theory predicts might mean that there are still undiscovered algorithmic improvements that could greatly improve the efficiency of deep learning.
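The 500-fold figure follows directly from that empirical ninth-power relationship: halving the error rate is a twofold improvement in performance, so

```latex
k = 2 \;\Rightarrow\; \text{compute multiplier} \approx k^{9} = 2^{9} = 512 \approx 500
```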

To halve the error rate, you can expect to need more than 500 times the computational resources.

As we noted, Moore’s Law and other hardware advances have provided massive increases in chip performance. Does this mean that the escalation in computing requirements doesn’t matter? Unfortunately, no. Of the 1,000-fold difference in the computing used by AlexNet and NASNet-A, only a six-fold improvement came from better hardware; the rest came from using more processors or running them longer, incurring higher costs.

Having estimated the computational cost-performance curve for image recognition, we can use it to estimate how much computation would be needed to reach even more impressive performance benchmarks in the future. For example, achieving a 5 percent error rate would require 10¹⁹ billion floating-point operations.

Important work by scholars at the University of Massachusetts Amherst allows us to understand the economic cost and carbon emissions implied by this computational burden. The answers are grim: Training such a model would cost US $100 billion and would produce as much carbon emissions as New York City does in a month. And if we estimate the computational burden of a 1 percent error rate, the results are considerably worse.

Is extrapolating out so many orders of magnitude a reasonable thing to do? Yes and no. Certainly, it is important to understand that the predictions aren’t precise, although with such eye-watering results, they don’t need to be to convey the overall message of unsustainability. Extrapolating this way
would be unreasonable if we assumed that researchers would follow this trajectory all the way to such an extreme outcome. We don’t. Faced with skyrocketing costs, researchers will either have to come up with more efficient ways to solve these problems, or they will abandon working on these problems and progress will languish.

On the other hand, extrapolating our results is not only reasonable but also important, because it conveys the magnitude of the challenge ahead. The leading edge of this problem is already becoming apparent. When Google subsidiary
DeepMind trained its system to play Go, it was estimated to have cost $35 million. When DeepMind’s researchers designed a system to play the StarCraft II video game, they purposefully didn’t try multiple ways of architecting an important component, because the training cost would have been too high.

At
OpenAI, an important machine-learning think tank, researchers recently designed and trained a much-lauded deep-learning language system called GPT-3 at the cost of more than $4 million. Even though they made a mistake when they implemented the system, they didn’t fix it, explaining simply in a supplement to their scholarly publication that “due to the cost of training, it wasn’t feasible to retrain the model.”

Even businesses outside the tech industry are now starting to shy away from the computational expense of deep learning. A large European supermarket chain recently abandoned a deep-learning-based system that markedly improved its ability to predict which products would be purchased. The company executives dropped that attempt because they judged that the cost of training and running the system would be too high.

Faced with rising economic and environmental costs, the deep-learning community will need to find ways to increase performance without causing computing demands to go through the roof. If they don’t, progress will stagnate. But don’t despair yet: Plenty is being done to address this challenge.

One strategy is to use processors designed specifically to be efficient for deep-learning calculations. This approach was widely used over the last decade, as CPUs gave way to GPUs and, in some cases, field-programmable gate arrays and application-specific ICs (including Google’s
Tensor Processing Unit). Fundamentally, all of these approaches sacrifice the generality of the computing platform for the efficiency of increased specialization. But such specialization faces diminishing returns. So longer-term gains will require adopting wholly different hardware frameworks, perhaps hardware that is based on analog, neuromorphic, optical, or quantum systems. Thus far, however, these wholly different hardware frameworks have yet to have much impact.

We must either adapt how we do deep learning or face a future of much slower progress.

Another approach to reducing the computational burden focuses on generating neural networks that, when implemented, are smaller. This tactic lowers the cost each time you use them, but it often increases the training cost (what we've described so far in this article). Which of these costs matters most depends on the situation. For a widely used model, running costs are the biggest component of the total sum invested. For other models (for example, those that frequently need to be retrained), training costs may dominate. In either case, the total cost must be larger than just the training on its own. So if the training costs are too high, as we've shown, then the total costs will be, too.

And that's the challenge with the various tactics that have been used to make implementation smaller: They don't reduce training costs enough. For example, one allows for training a large network but penalizes complexity during training. Another involves training a large network and then "pruning" away unimportant connections. Yet another finds as efficient an architecture as possible by optimizing across many models, something called neural-architecture search. While each of these techniques can offer significant benefits for implementation, the effects on training are muted, certainly not enough to address the concerns we see in our data. And in many cases they make the training costs higher.
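As a concrete illustration of the pruning idea mentioned above (a generic magnitude-pruning sketch, not any particular method from the literature the authors surveyed), the smallest weights of an already-trained layer are zeroed out, so the deployed network is sparser even though the full-size network still had to be trained first:

```python
# Generic magnitude pruning (illustrative only): zero out the smallest weights
# of a trained layer. The deployed model is smaller, but note that the full
# dense model still had to be trained, which is why pruning by itself does
# little to reduce *training* cost.
import numpy as np

rng = np.random.default_rng(1)
trained_weights = rng.normal(size=(256, 256))     # stand-in for a trained layer

def prune_by_magnitude(weights, keep_fraction):
    """Keep only the largest-magnitude weights; zero out the rest."""
    threshold = np.quantile(np.abs(weights), 1.0 - keep_fraction)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

sparse_weights = prune_by_magnitude(trained_weights, keep_fraction=0.1)
print("nonzero weights before:", np.count_nonzero(trained_weights))
print("nonzero weights after: ", np.count_nonzero(sparse_weights))
```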

One up-and-coming technique that could reduce training costs goes by the name meta-learning. The idea is that the system learns on a variety of data and then can be applied in many areas. For example, rather than building separate systems to recognize dogs in images, cats in images, and cars in images, a single system could be trained on all of them and used multiple times.

Unfortunately, recent work by
Andrei Barbu of MIT has revealed how hard meta-learning can be. He and his coauthors showed that even small differences between the original data and where you want to use it can severely degrade performance. They demonstrated that current image-recognition systems depend heavily on things like whether the object is photographed at a particular angle or in a particular pose. So even the simple task of recognizing the same objects in different poses causes the accuracy of the system to be nearly halved.

Benjamin Recht of the University of California, Berkeley, and others made this point even more starkly, showing that even with novel data sets purposely constructed to mimic the original training data, performance drops by more than 10 percent. If even small changes in data cause large performance drops, the data needed for a comprehensive meta-learning system might be enormous. So the great promise of meta-learning remains far from being realized.

Another possible strategy to evade the computational limits of deep learning would be to move to other, perhaps as-yet-undiscovered or underappreciated types of machine learning. As we described, machine-learning systems constructed around the insight of experts can be much more computationally efficient, but their performance can’t reach the same heights as deep-learning systems if those experts cannot distinguish all the contributing factors.
Neuro-symbolic methods and other techniques are being developed to combine the power of expert knowledge and reasoning with the flexibility often found in neural networks.

Like the situation that Rosenblatt faced at the dawn of neural networks, deep learning is today becoming constrained by the available computational tools. Faced with computational scaling that would be economically and environmentally ruinous, we must either adapt how we do deep learning or face a future of much slower progress. Clearly, adaptation is preferable. A clever breakthrough might find a way to make deep learning more efficient or computer hardware more powerful, which would allow us to continue to use these extraordinarily flexible models. If not, the pendulum will likely swing back toward relying more on experts to identify what needs to be learned.


Will iPhone 13 Trigger Headaches and Nausea?


It’s an all-too-common ploy, and legitimate manufacturing companies and distributors suffer mightily as a result of it. But the danger runs much deeper than getting ripped off when you were seeking a bargain. When purchasing pharmaceuticals, for example, you’d be putting your health in jeopardy if you didn’t receive the bona fide medicine that was prescribed. Yet for much of the world,
getting duped in this way when purchasing medicine is sadly the norm. Even people in developed nations are susceptible to being treated with fake or substandard medicines.

Closeup of mechanical resonators.
Tiny mechanical resonators produced the same way microchips are made (bottom) can serve to authenticate various goods. Being less than 1 micrometer across and transparent, these tags are essentially invisible.
University of Florida

Counterfeit electronics are also a threat, because they can reduce the reliability of safety-critical systems and can make even ordinary consumer electronics dangerous.
Cellphones and e-cigarettes, for example, have been known to blow up in the user’s face because of the counterfeit batteries inside them.

It would be no exaggeration to liken the proliferation of counterfeit goods to an infection of the global economic system, a pandemic of a different sort, one that has grown 100-fold over the past two decades, according to the International AntiCounterfeiting Coalition. So it's no wonder that many people in industry have long been working on ways to battle this scourge.

The traditional strategy to thwart counterfeiters is to apply some sort of authentication marker to the genuine article. These efforts include the display of Universal Product Codes (UPC) and Quick Response (QR) patterns, and sometimes the inclusion of radio-frequency identification (RFID) tags. But UPC and QR codes must be apparent so that they are accessible for optical scanning. This makes them susceptible to removal, cloning, and reapplication to counterfeit products. RFID tags aren’t as easy to clone, but they typically require relatively large antennas, which makes it hard to label an item imperceptibly with them. And depending on what they are used for, they can be too costly.

We've come up with a different solution, one based on radio-frequency (RF) nanoelectromechanical systems (NEMS). Like RFID tags, our RF NEMS devices don't have to be visible to be scanned. That, their tiny size, and the nature of their constituents make these tags largely immune to physical tampering or cloning. And they cost just a few pennies each at most.

Unseen NEMS tags could become a powerful weapon in the global battle against counterfeit products, even counterfeit bills. Intrigued? Here’s a description of the physical principles on which these devices are based and a brief overview of what would be involved in their production and operation.

You can think of an RF NEMS tag as a tiny sandwich. The slices of bread are two 50-nanometer-thick conductive layers of indium tin oxide, a material commonly used to make transparent electrodes, such as those for the touch screen on your phone. The filling is a 100-nm-thick piezoelectric film composed of a scandium-doped aluminum nitride, which is similarly transparent. With lithographic techniques similar to those used to fabricate integrated circuits, we etch a pattern in the sandwich that includes a ring in the middle suspended by four slender arms. That design leaves the circular surface free to vibrate.

The material making up the piezoelectric film is, of course, subject to the
piezoelectric effect: When mechanically deformed, the material generates an electric voltage across it. More important here is that such materials also experience what is known as the converse piezoelectric effect: An applied voltage induces mechanical deformation. We take advantage of that phenomenon to induce oscillations in the flexible part of the tag.

To accomplish this, we use lithography to fabricate a coil on the perimeter of the tag. This coil is connected at one end to the top conductive layer and on the other end to the bottom conductive layer. Subjecting the tag to an oscillating magnetic field creates an oscillating voltage across the piezoelectric layer, as dictated by
Faraday’s law of electromagnetic induction. The resulting mechanical deformation of the piezo film in turn causes the flexible parts of the tag to vibrate.

This vibration will become most intense when the frequency of excitation matches the natural frequency of the tiny mechanical oscillator. This is simple resonance, the phenomenon that allows an opera singer’s voice to shatter a wine glass when the right note is hit (and if the singer
tries really, really hard). It’s also what famously triggered the collapse of the Broughton suspension bridge near Manchester, England, in 1831, when 74 members of the 60th Rifle Corps marched across it with their footsteps landing in time with the natural mechanical resonance of the bridge. (After that incident, British soldiers were instructed to break step when they marched across bridges!) In our case, the relevant excitation is the oscillation of the magnetic field applied by a scanner, which induces the highest amplitude vibration when it matches the frequency of mechanical resonance of the flexible part of the tag.
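For readers who want the textbook version of that resonance behavior (this is standard driven, damped harmonic-oscillator physics, not a tag-specific model), the steady-state vibration amplitude as a function of the drive frequency ω is

```latex
A(\omega) = \frac{F_{0}/m}{\sqrt{\left(\omega_{0}^{2}-\omega^{2}\right)^{2} + \left(\gamma\,\omega\right)^{2}}}
```

which peaks sharply as the drive frequency ω approaches the natural frequency ω₀ set by the tag's geometry and materials; here F₀ is the strength of the drive from the scanner's field, m the effective mass of the vibrating part, and γ the damping.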

In truth, the situation is more complicated than this. The flexible portion of the tag doesn’t have just one resonant frequencyit has many. It’s like the membrane on a drum, which can
oscillate in various ways. The left side might go up as the right side goes down, and vice versa. Or the middle might be rising as the perimeter shifts downward. Indeed, there are all sorts of ways that the membrane of a drum deforms when it is struck. And each of those oscillation patterns has its own resonant frequency.

We designed our nanometer-scale tags to vibrate like tiny drumheads, with many possible modes of oscillation. The tags are so tiny (just a few micrometers across) that their vibrations take place at radio frequencies in the range of 80 to 90 megahertz. At this scale, more than the geometry of the tag matters: The vagaries of manufacturing also come into play.

For example, the thickness of the sandwich, which is nominally around 200 nm, will vary slightly from place to place. The diameter or the circularity of the ring-shaped portion is also not going to be identical from sample to sample. These subtle manufacturing variations will affect the mechanical properties of the device, including its resonant frequencies.

In addition, at this scale the materials used to make the device are not perfectly homogeneous. In particular, in the piezoelectric layer there are intrinsic variations in the crystal structure. Because of the ample amount of scandium doping, conical clusters of cubic crystals form randomly within the matrix of hexagonal crystals that make up the aluminum nitride grains. The random positioning of those tiny cones creates significant differences in the resonances that arise in seemingly identical tags.

Random variations like these can give rise to troublesome defects in the manufacture of some microelectronic devices. Here, though, random variation is not a bug, it's a feature! It allows each tag that is fabricated to serve as a unique marker. That is, while the resonances exhibited by a tag are controlled in a general way by its geometry, the exact frequencies, amplitudes, and sharpness of each of its resonances are the result of random variations. That makes each of these items unique and prevents a tag from being cloned, counterfeited, or otherwise manufactured in a way that would reproduce all the properties of the resonances seen in the original.

An RF NEMS tag is an example of what security experts call a
physical unclonable function. For discretely labeling something like a batch of medicine to document its provenance and prove its authenticity, it’s just what the doctor ordered.

You might be wondering at this point how we can detect and characterize the unique characteristics of the oscillations taking place within these tiny tags. One way, in principle, would be to put the device under a vibrometer microscope and look at it move. While that's possible (and we've done it in the course of our laboratory studies), this strategy wouldn't be practical or effective in commercial applications.

But it turns out that measuring the resonances of these tags isn’t at all difficult. That’s because the electronic scanner that excites vibrations in the tag has to supply the energy that maintains those vibrations. And it’s straightforward for the electronic scanner to determine the frequencies at which energy is being sapped in this way.

The scanner we are using at the moment is just a standard piece of electronic test equipment called a network analyzer. (The word
network here refers to the network of electrical components (resistors, capacitors, and inductors) in the circuit being tested, not to a computer network like the Internet.) The sensor we attach to the network analyzer is just a tiny coil, which is positioned within a couple of millimeters of the tag.

With this gear, we can readily measure the unique resonances of an individual tag. We record that signature by measuring how much the various resonant-frequency peaks are offset from those of an ideal tag of the relevant geometry. We translate each of those frequency offsets into a binary number and string all those bits together to construct a digital signature unique to each tag. The scheme that we are currently using produces 31-bit-long identifiers, which means that more than 2 billion different binary signatures are possible, enough to uniquely tag just about any product you can think of that might need to be authenticated.
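A toy version of that encoding might look like the following; the ideal frequencies, quantization step, and bit widths here are hypothetical placeholders, since the published scheme that yields 31-bit identifiers is not spelled out in the article.

```python
# Toy tag-signature encoding (illustrative only; the real bit allocation differs).
# Each resonance's offset from the ideal frequency gets a sign bit plus a
# fixed-width magnitude field; concatenating the fields gives the fingerprint.
IDEAL_FREQS_MHZ = [81.2, 84.7, 88.3]      # hypothetical ideal-tag resonances
BITS_PER_OFFSET = 10                       # 1 sign bit + 9 magnitude bits
STEP_KHZ = 1.0                             # hypothetical quantization step

def encode_signature(measured_freqs_mhz):
    """Concatenate per-resonance offset codes into one bit string."""
    bits = ""
    for ideal, measured in zip(IDEAL_FREQS_MHZ, measured_freqs_mhz):
        offset_khz = (measured - ideal) * 1000.0
        sign_bit = "1" if offset_khz < 0 else "0"
        magnitude = min(int(round(abs(offset_khz) / STEP_KHZ)),
                        2 ** (BITS_PER_OFFSET - 1) - 1)
        bits += sign_bit + format(magnitude, f"0{BITS_PER_OFFSET - 1}b")
    return bits                            # 3 resonances x 10 bits = 30 bits here

print(encode_signature([81.187, 84.722, 88.301]))
```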

Relying on subtle physical properties of a tag to define its unique signature prevents cloning but it does raise a different concern: Those properties could change.

For example, in a humid environment, a tag might adsorb some moisture from the air, which would change the properties of its resonances. That possibility is easy enough to protect against by covering the tag with a thin protective layer, say of some transparent polymer, which can be done without interfering with the tag’s vibrations.

But we also need to recognize that the frequencies of its resonances will vary as the tag changes temperature. We can get around that complication, though. Instead of characterizing a tag according to the absolute frequency of its oscillation modes, we instead measure the relationships between the frequencies of different resonances, which all shift in frequency by similar relative amounts when the temperature of the tag changes. This procedure ensures that the measured characteristics will translate to the same 31-bit number, whether the tag is hot or cold. We've tested this strategy over quite a large temperature range (from 0 to 200 °C) and have found it to be quite robust.
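A minimal sketch of that compensation trick, assuming (as the article describes) that every resonance shifts by roughly the same relative amount as the temperature changes:

```python
# Temperature compensation sketch: characterize the tag by ratios between
# resonant frequencies rather than by absolute frequencies. If all resonances
# shift by the same relative amount, the ratios (and hence the derived
# signature) stay essentially unchanged.
def frequency_ratios(resonances_mhz):
    """Express each resonance relative to the lowest one."""
    base = resonances_mhz[0]
    return [f / base for f in resonances_mhz]

cold = [81.187, 84.722, 88.301]        # hypothetical measurements near 0 °C
hot = [f * 0.998 for f in cold]        # same tag, uniformly shifted when hot

print(frequency_ratios(cold))
print(frequency_ratios(hot))           # essentially identical ratios
```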

A tag is characterized by the differences between its measured resonant frequencies (dips in red line) and the corresponding frequencies for an ideal tag (dips in black line). These differences are encoded as short binary strings, padded to a standard length, with one bit signifying whether the frequency offset is positive or negative (right). Concatenated, these strings provide a unique digital fingerprint for the tag (bottom).
University of Florida

The RF network analyzer we’re using as a scanner is a pricey piece of equipment, and the tiny coil sensor attached to it needs to be placed right up against the tag. While in some applications the location of the tag on the product could be standardized (say, for authenticating credit cards), in other situations the person scanning a product might have no idea where on the item the tag is positioned. So we are working now to create a smaller, cheaper scanning unit, one with a sensor that doesn’t have to be positioned right on top of the tag.

We are also exploring the feasibility of modifying the resonances of a tag
after it is fabricated. That possibility arises from a bit of serendipity in our research. You see, the material we chose for the piezoelectric layer in our tags is kind of unusual. Piezoelectric devices, like some of the filters in our cellphones, are commonly made from aluminum nitride. But the material we adopted includes large amounts of scandium dopant, which enhances its piezoelectric properties.

Unknown to us when we decided to use this more exotic formulation was a second quality it imparts: It makes the material into a
ferroelectric, meaning that it can be electrically polarized by applying a voltage to it, and that polarization remains even after the applied voltage is removed. That's relevant to our application, because the polarization of the material influences its electrical and mechanical properties. Imparting a particular polarization pattern on a tag, which could be done after it is manufactured, would alter the frequencies of its resonances and their relative amplitudes. This approach offers a strategy by which low-volume manufacturers, or even end users, could "burn" a signature into these tags.

Our research on RF NEMS tags has been funded in part by Discover Financial Services, the company behind the popular Discover credit card. But the applications of the tiny tags we’ve been working on will surely be of interest to many other types of companies as well. Even governments might one day adopt nanomechanical tags to authenticate paper money.

Just how broadly useful these tags will be depends, of course, on how successful we are in engineering a handheld scanner (which might even be a simple add-on for a smartphone) and whether our surmise is correct that these tags can be customized after manufacture. But we are certainly excited to be exploring all these possibilities as we take our first tentative steps toward commercialization of a technology that might one day help to stymie the world's most widespread form of criminal activity.

This article appears in the June 2021 print issue as "The Hidden Authenticators."
