Original Content podcast: The disappointment of ‘House of Cards’ and its final season

It seems like Netflix’s “House of Cards” had a real opportunity for a fresh start with season six.

Granted, the behind-the-scenes turmoil probably made this season particularly challenging: Production was already underway when “Star Trek: Discovery” actor Anthony Rapp came forward with allegations that Kevin Spacey made a sexual advance towards him when Rapp was only 14. In response, Netflix and production company Media Rights Capital halted production and ultimately decided to rewrite the season without Spacey’s character Frank Underwood.

If you’ve watched “House of Cards,” you know that this must have been a big change, since Underwood and his political schemes have been at the center of the show for five years. Still, the previous season ended with Robin Wright’s Claire Underwood taking over the presidency, so it seemed like the right time to rethink this as a show that’s centered on Claire.

What we got, however, was a season that’s still very much about Frank Underwood. Sure, he’s died offscreen before the season starts, and Spacey never appears in these new episodes. But he still casts a long shadow over the show, with all of the characters focused on the mystery of his death and the power vacuum he left behind. On the latest episode of the Original Content podcast, we try to explain why we found this approach so unsatisfying.

In addition, we talk about the death of comics legend Stan Lee and Hulu’s plans to create multiple series based on “Wild Cards,” a set of superhero stories edited by George R.R. Martin. This, in turn, leads us to the question on every “Song of Ice and Fire” fan’s mind: When is he going to finish the next book?

You can listen in the player below, subscribe using Apple Podcasts or find us in your podcast player of choice. If you like the show, please let us know by leaving a review on Apple. You also can send us feedback directly. (Or suggest shows and movies for us to review!)

Gillmor Gang: Nation State

The Gillmor Gang — Keith Teare, Esteban Kolsky, Frank Radice, Michael Markman, and Steve Gillmor. Recorded live Saturday, November 17, 2018. Democracy saved, fiat currency, and Facebook rope-a-dope.

Produced and directed by Tina Chase Gillmor @tinagillmor

@kteare, @ekolsky, @fradice, @mickeleh, @stevegillmor

Liner Notes

Live chat stream

The Gillmor Gang on Facebook

Google looks to former Oracle exec Thomas Kurian to move cloud business along

Diane Greene announced on Friday that she was stepping down after three years running Google’s cloud business. She will stay on until the first of the year to help her successor, Thomas Kurian, with the transition. Kurian left Oracle at the end of September after more than 20 years with the company, and he is charged with making Google’s cloud division more enterprise-friendly, a goal that has oddly eluded Google.

Greene was brought on board in 2015 to bring some order and enterprise savvy to the company’s cloud business. While she did help move the company along that path and grew the cloud business, it simply hasn’t been enough. There have been rumblings for months that Greene’s time was coming to an end.

So the torch is being passed to Kurian, a man who spent over two decades at a company that might be the exact opposite of Google. He ran product at Oracle, a traditional enterprise software company. Oracle itself has struggled to make the transition to a cloud company, but Bloomberg reported in September that one of the reasons Kurian was taking a leave of absence at the time was a difference of opinion with Chairman Larry Ellison over cloud strategy. According to the report, Kurian wanted to make Oracle’s software available on public clouds like AWS and Azure (and Google Cloud). Ellison apparently didn’t agree and a couple of weeks later Kurian announced he was moving on.

Even though Kurian’s background might not seem to be perfectly aligned with Google, it’s important to keep in mind that his thinking was evolving. He was also in charge of thousands of products and helped champion Oracle’s move to the cloud. He has experience successfully nurturing products enterprises have wanted, and perhaps that’s the kind of knowledge Google was looking for in its next cloud leader.

Ray Wang, founder and principal analyst at Constellation Research, says Google still needs to learn to support the enterprise, and he believes Kurian is the right person to help the company get there. “Kurian knows what’s required to make a cloud company work for enterprise customers,” Wang said.

If he’s right, perhaps an old-school enterprise executive is just what Google requires to turn its Cloud division into an enterprise-friendly powerhouse. Greene has always maintained that it was still early days for the cloud and Google had plenty of time to capture part of the untapped market, a point she reiterated in her blog post on Friday. “The cloud space is early and there is an enormous opportunity ahead,” she wrote.

She may be right about that, but market share positions seem to be hardening. AWS, which was first to market, has an enormous market share lead, with over 30 percent by most accounts. Microsoft is the only company with the market strength at the moment to give AWS a run for its money, and the only other company with double-digit market share. In fact, Amazon has a larger market share than the next four companies combined, according to data from Synergy Research.

While Google is always mentioned among the Big 3 cloud companies alongside AWS and Microsoft, with around $4 billion in revenue a year it has a long way to go to reach the level of those other companies. Despite Greene’s assertions, time could be running out to make a run. Perhaps Kurian is the person to push the company to grab some of that untapped market as companies move more workloads to the cloud. At this point, Google is counting on him to do just that.

The slow corrosion of techno-optimism

Two weeks from now, the Swahilipot Hub, a hackerspace / makerspace / center for techies and artists in Mombasa, Kenya, is hosting a Pwani Innovation Week, “to stimulate the innovation ecosystem in the Pwani Region.” Some of its organizers showed me around Mombasa’s cable landing site some years ago; they’re impressive people. The idea of the Hub and its forthcoming event fills me with unalloyed enthusiasm and optimism … and a bleak realization that it’s been a while since I’ve felt this way about a tech initiative.

What happened? How did we go from predictions that the tech industry would replace the hidebound status quo with a new democratized openness, power to the people, now that we all carry a networked supercomputer in our pocket … to widespread, metastasizing accusations of abuse of power? To cite just a few recent examples: Facebook being associated with genocide and weaponized disinformation; Google with sexual harassment and nonconsensual use of patients’ medical data; and Amazon’s search for a new headquarters called “shameful — it should be illegal” by The Atlantic.

To an extent some of this was inevitable. The more powerful you become, the less publicly acceptable it is to throw your increasing weight around like Amazon has done. I’m sure that to Google, subsuming DeepMind is a natural, inevitable corporate progression, a mere structural reshuffling, and it’s not their fault that the medical providers they’re working with never got explicit consent from their patients to share the provided data. Facebook didn’t know it was going to be a breeding ground for massive disinformation campaigns; it was, and remains, a colossal social experiment in which we are all participating, despite the growing impression that its negatives may outweigh its positives. And at both the individual and corporate levels, as a company grows more powerful, “power corrupts” remains an inescapable truism.

But let’s not kid ourselves. There’s more going on here than mischance and the natural side effects of growth, and this is particularly true for Facebook and Twitter. When we talk about loss of faith in tech, most of the time, I think, we mean loss of faith in social media. It’s true that we don’t want them to become censors. The problem is that they already are, as a side effect, via their algorithms, which show posts and tweets with high “engagement” — i.e., how vehemently users respond. The de facto outcome is to amplify outrage, and hence disinformation.

It may well be true, in a neutral environment, that the best answer to bad speech is more speech. The problem is that Facebook and Twitter are anything but neutral environments. Their optimization for “engagement” is a Brobdingnagian thumb on their scales, tilting their playing fields into whole Himalayas of advantages for bad faith, misinformation, disinformation, outrage and hate.

This optimization isn’t even necessary for their businesses to be somewhat successful. In 2014, Twitter had a strict chronological timeline, and recorded a $100 million profit before stock-based compensation — with relatively primitive advertising infrastructure, compared to today. Twitter and Facebook could kill the disinformation problem tomorrow, with ease, by switching from an algorithmic, engagement-based timeline back to a strict chronological one.

Never going to happen, of course. It would hurt their profits and their stock price too much. Just like Google was never going to consider itself bound to DeepMind’s cofounder’s assurance two years ago that “DeepMind operates autonomously from Google.” Just like Amazon was never going to consider whether siphoning money from local governments at its new so-called “co-headquarters” was actually going to be good for its new homes. Because while technology has benefited individuals, enormously, it’s really benefited technology’s megacorporations, and they’re going to follow their incentives, not ours.

Mark Zuckerberg’s latest post begins: “Many of us got into technology because we believe it can be a democratizing force for putting power in people’s hands.” I agree with that statement. Many of us did. But, looking back, were we correct? Is it really what the available evidence shows us? Has it, perhaps, put some power in people’s hands — but delivered substantially more to corporations and governments?

I fear that the available evidence seems to confirm, instead, the words of tech philosopher-king Maciej Ceglowski. His most relevant rant begins with a much simpler, punchier phrase: “Technology concentrates power.” Today it seems harder than ever to argue with that.

Vision Direct reveals breach that skimmed customer credit cards

European online contact lens supplier Vision Direct has revealed a data breach which compromised full credit card details for a number of its customers, as well as personal information.

Compromised data includes full name, billing address, email address, password, telephone number and payment card information, including card number, expiry date and CVV.

It’s not yet clear how many of Vision Direct’s customers are affected — we’ve reached out to the company with questions.

Detailing the data theft in a post on its website, Vision Direct writes that customer data was compromised between 12.11am GMT on November 3, 2018 and 12.52pm GMT on November 8 — with any logged-in users who were ordering or updating their information on VisionDirect.co.uk in that time window potentially affected.

It says it has emailed customers to notify them of the data theft.

“This data was compromised when entering data on the website and not from the Vision Direct database,” the company writes on its website. “The breach has been resolved and our website is working normally.”

“We advise any customers who believe they may have been affected to contact their banks or credit card providers and follow their advice,” it adds.

Affected payment methods include Visa, Mastercard and Maestro — but not PayPal (although Vision Direct says PayPal users’ personal data may still have been swiped).

It claims existing personal data previously stored in its database was not affected by the breach — writing that the theft “only impacted new information added or updated on the VisionDirect.co.uk website” (and only during the aforementioned time window).

“All payment card data is stored with our payment providers and so stored payment card information was not affected by the breach,” it adds.

Data appears to have been compromised via a JavaScript keylogger running on the Vision Direct website, according to security researcher chatter on Twitter.

After the breach was made public, security researcher Troy Mursch quickly found that a fake Google Analytics script had been running on Vision Direct’s UK website.

The malicious script also looks to have affected additional Vision Direct domains in Europe, as well as users of other e-commerce sites (at least one of which was found still running the fake script)…

Another security researcher, Willem de Groot, picked up on the scam in September, writing in a blog post then that: “The domain g-analytics.com is not owned by Google, as opposed to its legitimate google-analytics.com counterpart. The fraud is hosted on a dodgy Russian/Romanian/Dutch/Dubai network called HostSailor.”

He also found the malware had “spread to various websites”, saying its creator had crafted “14 different copies over the course of 3 weeks”, and tailored some versions to include a fake payment popup form “that was built for a specific website”.

“These instances are still harvesting passwords and identities as of today,” de Groot warned about two months before Vision Direct got breached.
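For readers who operate their own storefronts, here is a minimal, hypothetical Python sketch of the kind of check researchers like Mursch run: fetch a page you control and flag any script tag whose source is served from a lookalike analytics domain such as the g-analytics.com address named above. It is an illustration only, not the researchers’ actual tooling, and the example URL and domain lists are assumptions you would replace with your own.

```python
# Illustrative sketch only: list external <script> sources on a page you control
# and flag lookalike analytics domains (e.g. g-analytics.com vs. google-analytics.com).
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urlparse

KNOWN_GOOD = {"google-analytics.com", "www.google-analytics.com"}  # legitimate domain
KNOWN_BAD = {"g-analytics.com", "www.g-analytics.com"}             # spoof named in the research


class ScriptSrcCollector(HTMLParser):
    """Collects the src attribute of every <script> tag in an HTML document."""

    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            for name, value in attrs:
                if name == "src" and value:
                    self.sources.append(value)


def audit(url):
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    collector = ScriptSrcCollector()
    collector.feed(html)
    for src in collector.sources:
        host = urlparse(src).netloc.lower()
        if host in KNOWN_BAD:
            print(f"ALERT: known skimmer domain {host} -> {src}")
        elif "analytics" in host and host not in KNOWN_GOOD:
            print(f"Suspicious analytics lookalike: {host} -> {src}")


if __name__ == "__main__":
    audit("https://example.com")  # placeholder: point this at a site you operate
```

A crude scan like this is no substitute for a content security policy or subresource integrity checks, but it would at least surface a script loading from a domain that merely resembles Google’s.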

Cities that didn’t win HQ2 shouldn’t be counted out

The more than year-long dance between cities and Amazon for its second headquarters is finally over, with New York City and Washington, DC, capturing the big prize. With one of the largest economic development windfalls in a generation on the line, 238 cities used every tactic in the book to court the company – including offering to rename a city “Amazon” and appointing Jeff Bezos “mayor for life.”

Now that the process, and hysteria, are over, and cities have stopped asking “how can we get Amazon,” we’d like to ask a different question: How can cities build stronger start-up ecosystems for the Amazon yet to be built?

In September 2017, Amazon announced that it would seek a second headquarters. But rather than being the typical site selection process, this would become a highly publicized Hunger Games-esque scenario.

Amazon issued an RFP laying out what it sought, and it included everything any good urbanist would want, with walkability, transportation and cultural characteristics on the docket. But of course, incentives were also high on the list.

Amazon could have been a transformational catalyst for a plethora of cities throughout the US, but instead, it chose two superstar cities: the number one and number five metro areas by GDP, which combined amount to nearly $2 trillion in GDP. These two metro areas also have some of the highest real estate prices in the country, a swath of high-paying jobs and, of course, power — financial and political — close at hand.

Perhaps the take-away for cities isn’t that we should all be so focused on hooking that big fish from afar, but instead that we should be growing it in our own waters. Amazon itself is a great example of this. It’s worth remembering that over the course of a quarter century, Amazon went from a garage in Seattle’s suburbs to consuming 16 percent — or 81 million square feet — of the city’s downtown. On the other end of the spectrum, the largest global technology company in 1994 (the year of Amazon’s birth) was Netscape, which no longer exists.

The upshot is that cities that rely only on attracting massive technology companies are usually too late.

At the National League of Cities, we think there are ways to expand the pie that don’t reinforce existing spatial inequalities. This is exactly the idea behind the launch of our city innovation ecosystems commitments process. With support from the Schmidt Futures Foundation, fifty cities, ranging from rural townships and college towns to major metros, have joined with over 200 local partners and leveraged over $100 million in regional and national resources to support young businesses, harness technology and expand STEM education and workforce training for all.

The investments these cities are making today may in fact be the precursor to some of the largest tech companies of the future.

With that idea in mind, here are eight cities whose HQ2 bids didn’t win but that are ensuring they will be prepared to create the next tranche of high-growth startups.

Austin

Austin just built a medical school adjacent to a tier-one research university, the University of Texas. It’s the first such project to be completed in America in over fifty years. To ensure the addition translates into economic opportunity for the city, Austin’s public, private and civic leaders have come together to create Capital City Innovation to launch the city’s first Innovation District at the new medical school. This will help expand the city’s already world-class startup ecosystem into the health and wellness markets.

Baltimore

Baltimore is home to over $2 billion in academic research, ranking it third in the nation behind Boston and Philadelphia. In order to ensure everyone participates in the expanding research-based startup ecosystem, the city is transforming community recreation centers into maker and technology training centers to connect disadvantaged youth and families to new skills and careers in technology. The Rec-to-Tech Initiative will begin with community design sessions at four recreation centers, in partnership with the Digital Harbor Foundation, to create a feasibility study and implementation plan to review for further expansion.

Buffalo

The 120-acre Buffalo Niagara Medical Campus (BNMC) is home to eight academic institutions and hospitals and over 150 private technology and health companies. To ensure Buffalo’s startups reflect the diversity of its population, the Innovation Center at BNMC has just announced a new program to provide free space and mentorship to 10 high-potential minority- and/or women-owned start-ups.

Denver

Like Seattle, Denver is seeing real estate development grow at a feverish rate. And while the growth is bringing new opportunity, the city is expanding faster than its workforce can keep pace. To ensure a sustainable growth trajectory, Denver has recruited the Next Generation City Builders to train students and retrain existing workers to fill high-demand jobs in architecture, design, construction and transportation.

Providence

With a population of 180,000, Providence is home to eight higher education institutions – including Brown University and the Rhode Island School of Design – making it a hub for both technical and creative talent. The city of Providence, in collaboration with its higher education institutions and two hospital systems, has created a new public-private-university partnership, the Urban Innovation Partnership, to collectively contribute and support the city’s growing innovation economy. 

Pittsburgh

Pittsburgh may have once been known as a steel town, but today it is a global mecca for robotics research, with robotics R&D within its borders at over 4.5 times the national average. Like Baltimore, Pittsburgh is creating a more inclusive innovation economy through a Rec-to-Tech program that will reinvest in the city’s 10 recreation centers, connecting students and parents to the skills needed to participate in the economy of the future.

Tampa

Tampa is already home to 30,000 technical and scientific consultant and computer design jobs — and that number is growing. To meet future demand and ensure the region has an inclusive growth strategy, the city of Tampa, with 13 university, civic and private sector partners, has announced “Future Innovators of Tampa Bay.” The new six-year initiative seeks to provide the opportunity for every one of the Tampa Bay Region’s 600,000 K-12 students to be trained in digital creativity, invention and entrepreneurship.

These eight cities help demonstrate the innovation we are seeing on the ground now, all throughout the country. The seeds of success have been planted with people, partnerships and public leadership at the fore. Perhaps they didn’t land HQ2 this time, but when we fast-forward to 2038 — and the search for Argo AI, SparkCognition or Welltok’s new headquarters is well underway — the groundwork will have been laid for cities with strong ecosystems already in place to compete on an even playing field.

Quantum computing, not AI, will define our future

The word “quantum” gained currency in the late 20th century as a descriptor signifying something so significant that it defied the use of common adjectives. For example, a “quantum leap” is a dramatic advancement (also an early ’90s television series starring Scott Bakula).

At best, that is an imprecise (though entertaining) definition. When “quantum” is applied to “computing,” however, we are indeed entering an era of dramatic advancement.

Quantum computing is technology based on the principles of quantum theory, which explains the nature of energy and matter on the atomic and subatomic level. It relies on the existence of mind-bending quantum-mechanical phenomena, such as superposition and entanglement.

Erwin Schrödinger’s famous 1930s thought experiment involving a cat that was both dead and alive at the same time was intended to highlight the apparent absurdity of superposition, the principle that quantum systems can exist in multiple states simultaneously until observed or measured. Today, quantum computers contain dozens of qubits (quantum bits), which take advantage of that very principle. Each qubit exists in a superposition of zero and one (i.e., has non-zero probabilities to be a zero or a one) until measured. The development of qubits has implications for dealing with massive amounts of data and for achieving previously unattainable levels of computing efficiency, which together are the tantalizing potential of quantum computing.
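To make superposition concrete, here is a tiny illustrative sketch of my own (not drawn from the article) that simulates a single qubit with numpy: start in a definite 0 state, apply a Hadamard gate, and the resulting amplitudes give a 50/50 chance of measuring 0 or 1.

```python
# Illustrative only: a single qubit modeled as a 2-element complex vector.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                              # definite |0> state
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

psi = hadamard @ ket0                 # superposition (|0> + |1>) / sqrt(2)
probs = np.abs(psi) ** 2              # Born rule: probability = |amplitude|^2
print(probs)                          # [0.5 0.5] -> equal odds of reading 0 or 1
```

Until the qubit is measured, both amplitudes coexist; measurement picks one outcome with those probabilities.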

While Schrödinger was thinking about zombie cats, Albert Einstein was observing what he described as “spooky action at a distance,” particles that seemed to be communicating faster than the speed of light. What he was seeing were entangled electrons in action. Entanglement refers to the observation that the state of particles from the same quantum system cannot be described independently of each other. Even when they are separated by great distances, they are still part of the same system. If you measure one particle, the rest seem to know instantly. The current record distance for measuring entangled particles is 1,200 kilometers or about 745.6 miles. Entanglement means that the whole quantum system is greater than the sum of its parts.
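Entanglement can be sketched the same way (again, my own illustration, not vendor code): prepare two qubits, entangle them with a Hadamard gate followed by a CNOT, and the joint probabilities show perfectly correlated outcomes even though each qubit on its own looks random.

```python
# Illustrative only: build the Bell state (|00> + |11>) / sqrt(2) with numpy.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
identity = np.eye(2, dtype=complex)
cnot = np.array([[1, 0, 0, 0],        # control = first qubit, target = second
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = np.kron(ket0, ket0)                   # two qubits, both |0>
state = np.kron(hadamard, identity) @ state   # put the first qubit in superposition
state = cnot @ state                          # entangle the pair

probs = np.abs(state) ** 2                    # outcomes ordered |00>, |01>, |10>, |11>
print(dict(zip(["00", "01", "10", "11"], probs.round(2))))
# {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
```

The mixed outcomes 01 and 10 never appear: measure either qubit and the other’s result is fixed, which is exactly the correlation Einstein found so spooky.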

If these phenomena make you vaguely uncomfortable so far, perhaps I can assuage that feeling simply by quoting Schrödinger, who purportedly said after his development of quantum theory, “I don’t like it, and I’m sorry I ever had anything to do with it.”

Various parties are taking different approaches to quantum computing, so a single explanation of how it works would be subjective. But one principle may help readers get their arms around the difference between classical computing and quantum computing. Classical computers are binary. That is, they depend on the fact that every bit can exist only in one of two states, either 0 or 1. Schrödinger’s cat merely illustrated that subatomic particles could exhibit innumerable states at the same time. If you envision a sphere, a classical bit can sit only at the “north pole” (call it 0) or the “south pole” (1). A qubit, by contrast, can occupy any point on the entire sphere, and relating those states between qubits enables certain correlations that make quantum computing well suited for a variety of specific tasks that classical computing cannot accomplish. Creating qubits and maintaining their existence long enough to accomplish quantum computing tasks is an ongoing challenge.
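In standard textbook notation (not tied to any vendor’s hardware), the sphere described above is the Bloch sphere, where θ and φ are simply the polar and azimuthal angles locating a point on it. A single-qubit state can then be written as

\[
|\psi\rangle \;=\; \cos\tfrac{\theta}{2}\,|0\rangle \;+\; e^{i\varphi}\sin\tfrac{\theta}{2}\,|1\rangle,
\qquad
P(0) = \cos^{2}\tfrac{\theta}{2}, \quad P(1) = \sin^{2}\tfrac{\theta}{2},
\]

so the north pole (θ = 0) is a definite 0, the south pole (θ = π) is a definite 1, and every other point on the sphere is a legitimate superposition with its own measurement probabilities.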

[Photo: IBM researcher Jerry Chow in the quantum computing lab at IBM’s T.J. Watson Research Center.]

Humanizing Quantum Computing

These are just the beginnings of the strange world of quantum mechanics. Personally, I’m enthralled by quantum computing. It fascinates me on many levels, from its technical arcana to its potential applications that could benefit humanity. But a qubit’s worth of witty obfuscation on how quantum computing works will have to suffice for now. Let’s move on to how it will help us create a better world.

Quantum computing’s purpose is to aid and extend the abilities of classical computing. Quantum computers will perform certain tasks much more efficiently than classical computers, providing us with a new tool for specific applications. Quantum computers will not replace their classical counterparts. In fact, quantum computers require classical computers to support their specialized abilities, such as systems optimization.

Quantum computers will be useful in advancing solutions to challenges in diverse fields such as energy, finance, healthcare, aerospace, among others. Their capabilities will help us cure diseases, improve global financial markets, detangle traffic, combat climate change, and more. For instance, quantum computing has the potential to speed up pharmaceutical discovery and development, and to improve the accuracy of the atmospheric models used to track and explain climate change and its adverse effects.

I call this “humanizing” quantum computing, because such a powerful new technology should be used to benefit humanity, or we’re missing the boat.

[Photo: Intel’s 17-qubit superconducting test chip for quantum computing, which has unique features for improved connectivity and better electrical and thermo-mechanical performance. (Credit: Intel Corporation)]

An Uptick in Investments, Patents, Startups, and more

That’s my inner evangelist speaking. In factual terms, the latest verifiable, global figures for investment and patent applications reflect an uptick in both areas, a trend that’s likely to continue. Going into 2015, non-classified national investments in quantum computing reflected an aggregate global spend of about $1.75 billion USD, according to The Economist. The European Union led with $643 million. The U.S. was the top individual nation with $421 million invested, followed by China ($257 million), Germany ($140 million), Britain ($123 million) and Canada ($117 million). Twenty countries have invested at least $10 million in quantum computing research.

At the same time, according to a patent search enabled by Thomson Innovation, the U.S. led in quantum computing-related patent applications with 295, followed by Canada (79), Japan (78), Great Britain (36), and China (29). The number of patent families related to quantum computing was projected to increase 430 percent by the end of 2017.

The upshot is that nations, giant tech firms, universities, and start-ups are exploring quantum computing and its range of potential applications. Some parties (e.g., nation states) are pursuing quantum computing for security and competitive reasons. It’s been said that quantum computers will break current encryption schemes, kill blockchain, and serve other dark purposes.

I reject that proprietary, cutthroat approach. It’s clear to me that quantum computing can serve the greater good through an open-source, collaborative research and development approach that I believe will prevail once wider access to this technology is available. I’m confident crowd-sourcing quantum computing applications for the greater good will win.

If you want to get involved, check out the free tools that the household-name computing giants such as IBM and Google have made available, as well as the open-source offerings out there from giants and start-ups alike. Actual time on a quantum computer is available today, and access opportunities will only expand.

In keeping with my view that proprietary solutions will succumb to open-source, collaborative R&D and universal quantum computing value propositions, allow me to point out that several dozen start-ups in North America alone have jumped into the QC ecosystem along with governments and academia. Names such as Rigetti Computing, D-Wave Systems, 1Qbit Information Technologies, Inc., Quantum Circuits, Inc., QC Ware and Zapata Computing, Inc. may become well-known, or they may be subsumed by bigger players or undone by their burn rates; anything is possible in this nascent field.

Developing Quantum Computing Standards

Another way to get involved is to join the effort to develop quantum computing-related standards. Technical standards ultimately speed the development of a technology, introduce economies of scale, and grow markets. Quantum computer hardware and software development will benefit from a common nomenclature, for instance, and agreed-upon metrics to measure results.

Currently, the IEEE Standards Association Quantum Computing Working Group is developing two standards. One is for quantum computing definitions and nomenclature so we can all speak the same language. The other addresses performance metrics and performance benchmarking to enable measurement of quantum computers’ performance against classical computers and, ultimately, each other.

The need for additional standards will become clear over time.
