Water Abundance Xprize’s $1.5M winner shows how to source fresh water from the air

You may remember that back in May, the Water Abundance Xprize named the five finalists in its contest to demonstrate the sustainable and scalable collection of water from the air. Interestingly, none of those finalists were the winner — after one dropped out, an eliminated team stepped in and took the prize.

The goal of the program was to collect “a minimum of 2,000 liters of water per day from the atmosphere using 100 percent renewable energy, at a cost of no more than 2 cents per liter.” No simple task! In fact, I would have guessed it was an impossible one.

But many teams made the attempt anyway, and with a variety of approaches at that. The runner-up, Hawaii’s JMCC Wing, combined a large, super-efficient wind turbine with a commercial condenser unit.

The winner was Skysource/Skywater Alliance, which has already deployed many of its units abroad (and, apparently, at Miranda Kerr’s house). They can run off the grid or alternative power sources, and use an extremely efficient adiabatic distillation method.

It’s cheaper and more efficient than desalination, and doesn’t require the presence of nearby water sources or rain. Skywater boxes, which range from somewhat smaller to rather larger than a refrigerator, can produce up to 300 gallons per day; that’s about 1,135 liters, so two of them would meet the contest’s volume requirement if the cost were low enough and they were running on renewables.
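
The claim checks out on the back of an envelope. Here is a quick sketch in Python using only the figures cited above; the cost line simply restates the contest cap, not Skywater’s actual operating cost:

```python
# Back-of-the-envelope check of the contest math, using the figures cited above.
GALLONS_TO_LITERS = 3.78541

per_unit_liters = 300 * GALLONS_TO_LITERS   # one Skywater box at its 300 gal/day maximum
target_liters = 2_000                       # Xprize minimum daily output
cost_cap = 0.02                             # Xprize cap: $0.02 per liter

units_needed = -(-target_liters // int(per_unit_liters))  # ceiling division
daily_budget = target_liters * cost_cap                   # total spend allowed per day

print(f"One unit: ~{per_unit_liters:.0f} L/day, so {units_needed} units cover 2,000 L")
print(f"All-in daily operating budget under the 2-cent cap: ${daily_budget:.2f}")
```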

That was sufficiently demonstrated to the Xprize inspection teams, it seems, and the team was this weekend awarded the $1.5 million top prize despite not making it into the finals.

“It has been pretty intense but it’s really been exciting for me to see water come out of our system, because this is connected to real lives in the world,” said team member Jay Hasty in an Xprize video.

This doesn’t mean water scarcity is a solved problem by a long shot — but competitions like this are great ways of promoting new development in a space and also creating awareness of it. Hopefully Skywater systems will be installed where they’re needed, but development will almost certainly continue on those created by the other teams competing for the prize.

Los Angeles investors and entrepreneurs launch PledgeLA, a diversity and inclusion program

In an attempt to boost diversity and inclusion efforts and civic engagement between the growing technology industry in Los Angeles and the community that surrounds it, over 80 venture capitalists and entrepreneurs joined the city’s mayor, Eric Garcetti, and the non-profit Annenberg Foundation to announce PledgeLA.

The initiative is one way in which the Los Angeles technology community is attempting to ensure that it does not repeat the same mistakes made by Silicon Valley and San Francisco and alienate fellow citizens who could feel left out of the opportunities created by tech’s rise to prominence in the city.

“L.A.’s tech growth is no accident – it is a tribute to our region’s tradition of creativity, leadership in innovation, and wealth of talent. With PledgeLA, we will promote transparency in a growing sector and open the doors of opportunity to our diverse base of workers, no matter their race, gender, or background,” said Garcetti, in a statement.

As part of the diversity and inclusion effort, the signatories to PledgeLA have agreed to track civic participation and diversity data each year and to make that data publicly available.

The metrics that signatories will track include community engagement statistics like participation in mentorship programs, volunteering, board service, offering internships, using local banks, giving preference to vendors owned by women or minorities, dedicating a portion of annual spending to local impact initiatives, and investing in local Los Angeles startups.

Demographics at funds and startups will also be under the microscope, since signatories have agreed to report on their composition by race, gender, age, sexual orientation, disability status, immigration status, veteran status, educational attainment, socioeconomic origin, and tenure at a firm. PledgeLA participants will also need to adopt a code of conduct around diversity and inclusion and are required to privilege diversity in corporate hiring practices.

Over the past five years, Los Angeles has emerged as one of the top five destinations in the U.S. for technology investment and corporate development. It’s one of the fastest-growing tech hubs in the country, with the 100 largest tech companies in L.A. and Orange County reporting a 24 percent increase in employment from the previous year, according to data provided by the Annenberg Foundation.

The local non-profit was instrumental in setting up the PledgeLA initiative, which grew out of discussions that the foundation fostered among the Los Angeles venture community.

Nonetheless, diverse talent remains vastly underrepresented in the workforce of the local tech sector. The landmark PledgeLA initiative grew out of a series of problem-solving sessions within the Los Angeles venture capital community.

“This commitment from L.A.’s venture capitalists and Mayor Garcetti means that change is happening, and this change is good, as long as we can work to make Los Angeles a more diverse, inclusive and community-focused city that benefits everyone,” said Annenberg Foundation Chairman, President and chief executive Wallis Annenberg, in a statement.

For Los Angeles investors like Upfront Ventures partner Kobie Fuller, diverse hiring practices are just good business sense.

“Investing in a diverse array of founders, looking for talent in all corners of the city, and bringing different voices to the table when making decisions on investments is just smart business,” Fuller said in a statement. “We know companies with a diverse workforce are more successful, which, in turn, increases community engagement and provides opportunities for the community-at-large. PledgeLA will put Los Angeles on the right trajectory.”

Nearly every large investment firm and Los Angeles-based company agreed to sign on to the pledge, with at least three notable exceptions: Snap, SpaceX and Tesla do not appear on the list of companies willing to participate in the diversity pledge.

Looks like Tiger Global Management just closed the second biggest venture fund this year

As we noted six months ago, Tiger Global Management, the 17-year-old investment group, is starting to see a whole lot of its venture-related startup investments pay off. We also guessed that because of those wins, the outfit was likely lining up commitments for a new mega fund.

It was a safe bet. According to a new report from the Financial Times, the New York-based outfit has just closed its newest venture vehicle with $3.75 billion after actively marketing it for just six weeks.

That makes Tiger’s new vehicle one of the largest venture funds in a world dotted with ever-bigger pools of capital, including SoftBank’s massive $93 billion Vision Fund, which closed last year, and the newest global fund assembled by Sequoia, which closed with $8 billion in capital commitments in late summer, a record-breaking amount for the storied venture firm.

In fact, Tiger’s new fund may be the second largest venture fund to close so far this year, just beating out YF Capital’s newest pool, which closed with $2.5 billion in July, and the numerous other firms that closed billion-dollar-plus funds in 2018, including Tunlan Investment’s Xiong’An Global Blockchain Innovation Fund, which closed in April with $1.6 billion; Lightspeed Venture Partners, which closed on $1.8 billion in capital across two new funds in July; and General Catalyst, which gathered up at least $1.375 billion in capital commitments earlier this year.

According to the FT, the Tiger fund closed exactly a week ago and will focus on consumer Internet, cloud computing and industry software, as well as direct-to-consumer companies, in the U.S., China and India, where, according to The Economic Times, Tiger is stepping up its investments after hitting the pause button for a few years.

Tiger’s apparent inspiration: the reported $3.3 billion it recently made from an early bet on Flipkart, which sold the majority of its e-commerce business to retail leviathan Walmart in May for $16 billion.

Other recent exits that Tiger’s investors have surely liked seeing include the sale of Glassdoor, the jobs and salary website, to the Japanese human resources company Recruit Holdings for $1.2 billion in cash back in May, and Spotify’s direct listing on the stock market back in April. Tiger owned 7.2 percent of the streaming media company as of its first day of trading. The listing provided Spotify with a market cap of $30 billion; its current market cap is down slightly to $26 billion.

Even more recently, Eventbrite and SurveyMonkey — two Tiger portfolio companies — have gone public, though both have handed back some gains since their IPOs last month. Eventbrite opened at $36 per share and is currently trading at $26 per share; SurveyMonkey has lost roughly one-third of its value since its first trading day.

Tiger was founded by Chase Coleman, a protégé of hedge fund pioneer Julian Robertson. According to the FT, the outfit now manages roughly $26 billion and about half of that is being funneled into venture-backed startups, with portfolio manager Lee Fixel largely overseeing its venture bets.

Some of its investments have been contrarian and underscore the extent to which Tiger is not like other investment firms. It took a stake of more than $1 billion in SoftBank Group earlier this year, for example. Tiger is also an investor in the e-cig company Juul, which has come under intense regulatory scrutiny in recent months but remains among the fastest-growing companies in the Bay Area right now.

Interestingly, even SoftBank’s Vision Fund — which is currently in an uncomfortable public position, having raised nearly half its money from Saudi Arabia’s Crown Prince Mohammed bin Salman — couldn’t invest in the company if it wanted to. According to Jeff Housenbold, a managing director with SoftBank’s Vision Fund, SoftBank, like many other venture investors, has clauses in its agreement with its own backers that prohibit investments in pornography, alcohol, drugs, weapons and tobacco.

Whether Tiger has also raised money from sovereign wealth funds, we don’t know. The firm has never publicly disclosed who, beyond its employees, owns shares in the firm.

Guillermo del Toro is making a stop-motion Pinocchio movie for Netflix

Guillermo del Toro, the Academy Award-winning director of “The Shape of Water” (not to mention “Hellboy,” “Pan’s Labyrinth” and “Pacific Rim”) is making a new version of “Pinocchio” for Netflix.

I’d thought that after del Toro’s awards victory earlier this year, he might finally make his long-thwarted adaptation of “At the Mountains of Madness.” And while I’m not giving up hope that I’ll see a del Toro-helmed version of the classic H.P. Lovecraft horror story one day, it seems that he’s going in a different direction for now.

The official announcement from Netflix describes this as del Toro’s “lifelong passion project,” and says that it will be both a stop-motion animated film and a musical.

“No art form has influenced my life and my work more than animation and no single character in history has had as deep of a personal connection to me as Pinocchio,” del Toro said in a statement. “In our story, Pinocchio is an innocent soul with an uncaring father who gets lost in a world he cannot comprehend. He embarks on an extraordinary journey that leaves him with a deep understanding of his father and the real world. I’ve wanted to make this movie for as long as I can remember.”

This isn’t the director’s first project for Netflix — he previously created the animated series “Trollhunters,” and he has another series in the works for the streaming service, “Guillermo del Toro Presents 10 After Midnight.” Netflix says that in addition to directing the film with Mark Gustafson (“Fantastic Mr. Fox”), he will co-write and co-produce it. The Jim Henson Company (which is also making a “Dark Crystal” prequel series for Netflix) and ShadowMachine are producing as well.

Netflix also announced today that it’s raising an additional $2 billion in debt to fund its original content plans. It says production on “Pinocchio” will begin this fall.

Uber’s head of corporate development resigns

Less than one month after the Wall Street Journal reported allegations of sexual misconduct by Cameron Poetzscher, Uber’s head of corporate development, Poetzscher has resigned, according to the WSJ.

Uber has confirmed Poetzscher’s resignation to TechCrunch. While Uber searches for a new corporate development lead, Uber CFO Nelson Chai will oversee Poetzscher’s duties.

“We thank Cam for his four and a half years of service to Uber,” an Uber spokesperson told TechCrunch.

Uber had previously hired an outside firm to look into allegations against Poetzscher. That firm reportedly found Poetzscher did indeed have a history of making sexual remarks about female Uber employees. Uber reportedly gave him a formal warning, reduced his annual bonus and required that he take sensitivity training.

However, Uber later promoted him to acting head of finance. At the time of the WSJ’s article last month, Poetzscher said he was “rightfully disciplined” and that he had “learned from this error in judgment.”

Developing…

The future of photography is code

What’s in a camera? A lens, a shutter, a light-sensitive surface, and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung, and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.

The reason for this shift is pretty simple: cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.

Not enough buckets

An image sensor one might find in a digital camera.

The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, Omnivision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.

But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can put smaller ones, but they can’t catch as much each; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops and no amount of bucket-rearranging can change that.

Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.

Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.

The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?

In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.

Isn’t all photography computational?

The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats, and color science.

For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.

The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.

These were early examples of deriving metadata from the image and using it proactively, to improve that image or feed forward to the next.

In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation, and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.

The limits of traditional imaging

Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm²; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm².

Roughly speaking it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors, and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
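
A quick check of that “order of magnitude” claim, using the sensor dimensions quoted above:

```python
# Light-gathering area comparison, from the sensor dimensions quoted above.
aps_c_area = 23 * 15      # mm^2 for a typical DSLR APS-C sensor -> 345
phone_area = 7 * 5.8      # mm^2 for the iPhone XS sensor (approximate) -> 40.6

print(f"APS-C: {aps_c_area} mm^2, phone: {phone_area:.1f} mm^2")
print(f"Area ratio: {aps_c_area / phone_area:.1f}x")  # ~8.5x, roughly an order of magnitude
```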

Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.

Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.

All competition therefore comprises what these companies build on top of that foundation.

Image as stream

The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.

A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.

To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.

Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
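
Conceptually, that rolling capture is just a bounded ring buffer of recent frames. A minimal sketch, not any vendor’s actual pipeline, with the 60-frame depth borrowed from the example above:

```python
from collections import deque

class FrameStream:
    """Keeps only the most recent N frames; older ones fall off the end."""

    def __init__(self, depth: int = 60):
        self.buffer = deque(maxlen=depth)  # bounded: old frames are dropped automatically

    def on_sensor_readout(self, frame) -> None:
        """Called for every sensor readout while the camera app is open."""
        self.buffer.append(frame)

    def capture(self) -> list:
        """On shutter press, hand the recent history to the processing pipeline."""
        return list(self.buffer)
```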

Access to the stream allows the camera to do all kinds of things. It adds context.

Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.

A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.

This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
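
A heavily simplified sketch of that merge step, assuming the bracketed frames are already aligned and scaled to [0, 1]; real pipelines also handle alignment, ghost removal and tone mapping:

```python
import numpy as np

def merge_exposures(frames: list) -> np.ndarray:
    """Weight each pixel by how well exposed it is, then average across the bracket."""
    stack = np.stack(frames)  # shape: (n_frames, h, w) or (n_frames, h, w, 3)
    # Pixels near mid-gray are trusted most; blown-out or crushed pixels count for little.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    return (weights * stack).sum(axis=0) / weights.sum(axis=0).clip(min=1e-6)
```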

Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
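
In code, the final compositing step of a portrait mode is simple once the hard part, the segmentation mask, exists. A toy sketch, assuming `subject_mask` comes from some person-segmentation model and using the plain Gaussian blur that the following paragraphs describe as a shortcut:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_portrait_mode(image, subject_mask, blur_sigma=8.0):
    """Composite the sharp subject over a blurred copy of the background.

    image:        (h, w, 3) float array in [0, 1]
    subject_mask: (h, w) float array, 1.0 on the subject, 0.0 on the background
    """
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=blur_sigma) for c in range(3)],
        axis=-1,
    )
    mask = subject_mask[..., None]  # broadcast the mask over the color channels
    return mask * image + (1.0 - mask) * blurred
```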

These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous datasets and immense amounts of computation time.

What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.

DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. As with a dog walking on its hind legs, we are amazed that it occurs at all.

But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.

Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.
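
To give a sense of what lies between the shortcut and the photon-level simulation: merely swapping the Gaussian kernel for a disc-shaped one, roughly the shape of a lens aperture, already renders out-of-focus highlights as circles rather than smears. A sketch of that intermediate step:

```python
import numpy as np
from scipy.signal import fftconvolve

def disc_kernel(radius: int) -> np.ndarray:
    """Circular (aperture-shaped) blur kernel, normalized to sum to 1."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disc = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    return disc / disc.sum()

def disc_blur(channel: np.ndarray, radius: int = 10) -> np.ndarray:
    """Blur one image channel with the disc kernel; bright points spread into circles."""
    return fftconvolve(channel, disc_kernel(radius), mode="same")
```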

Similarly, the idea of combining five, ten, or a hundred images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.

If the result is a better product, the computational power and engineering ability have been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.

Double vision

One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.

This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right by the first that captures photos extremely similar to those taken by the first.

A mockup of what a line of color iPhones could look like.

Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.

These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.

The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
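
To give a flavor of the work involved, even a bare-bones alignment of the second camera onto the first takes feature detection, matching and a perspective warp on every frame. A rough sketch with OpenCV; this is illustrative only, as phone pipelines rely on calibrated geometry, depth data and far more sophisticated fusion:

```python
import cv2
import numpy as np

def align_secondary_to_primary(primary_gray: np.ndarray, secondary_gray: np.ndarray) -> np.ndarray:
    """Roughly warp the secondary camera's frame onto the primary camera's viewpoint."""
    orb = cv2.ORB_create(2000)  # detect and describe keypoints in both frames
    kp1, des1 = orb.detectAndCompute(primary_gray, None)
    kp2, des2 = orb.detectAndCompute(secondary_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:500]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = primary_gray.shape
    return cv2.warpPerspective(secondary_gray, homography, (w, h))
```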

So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.

Light and code

The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices, and everywhere that light is captured and turned into images.

Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers, and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.

What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.

Richard Branson steps down as chairman of Virgin Hyperloop One

Richard Branson has reportedly stepped down as chairman of Virgin Hyperloop One.

In a statement cited by Reuters, Branson said that the role would require more time than he could devote to the company.

“At this stage in the company’s evolution, I feel it needs a more hands-on Chair, who can focus on the business and these opportunities. It will be difficult for me to fulfill that commitment as I already devote significant time to my philanthropic ventures and the many businesses within the Virgin Group.”

What wasn’t mentioned was the cancellation of a planned project with Saudi Arabia after Branson criticized the kingdom and suspended negotiations around an intended $1 billion investment from the nation’s Public Investment Fund into Virgin’s space operations.

Branson is one of several business leaders who have cut ties with the Kingdom of Saudi Arabia following the alleged assassination and dismemberment of dissident journalist Jamal Khashoggi.

Virgin Hyperloop’s largest investor is the United Arab Emirates shipping and logistics company DP World. Earlier this year, the two companies launched a logistics joint venture that would bring hyperloop technologies to the industry. DP World first invested in what is now Virgin Hyperloop back in 2016.

Earlier this month, Virgin Hyperloop released the results of a feasibility study conducted with Black & Veatch of the company’s planned route through Missouri to link St. Louis and Kansas City.

The independent report, authored by global infrastructure solutions company Black & Veatch, analyzes a proposed route through the I-70 corridor, the major highway traversing Missouri, and verifies the favorable safety and sustainability opportunities this new mode of transportation offers.

“A feasibility study of this depth represents the first phase of actualization of a full-scale commercial hyperloop system, both for passengers and cargo in the United States,” said chief executive Rob Lloyd, at the time. “We are especially proud that Missouri, with its iconic status in the history of U.S. transportation as the birthplace of the highway system, could be the keystone of a nation-wide network. The resulting socio-economic benefits will have enormous regional and national impact.”

In the U.S., Colorado and Ohio are also examining feasibility studies for Virgin Hyperloop technologies, while other projects are underway in India and the United Arab Emirates.

Hyperloop technology developers like Virgin Hyperloop One and Hyperloop Transportation Technologies all draw their inspiration from early plans drawn up by Elon Musk.

As we wrote at the time:

The Hyperloop features tubes with a low level of pressurization that would contain pods with skis made of the SpaceX alloy Inconel, which is designed to withstand high pressure and heat. Air exiting those skis through tiny holes would create an air cushion on which the pods would ride, and they’d be propelled by air jet inlets. And all of that would cost only around $6 billion, according to Musk.

We’ve reached out to a spokesperson from Virgin Hyperloop One for comment and will update when we hear back.

Ford expands self-driving vehicle program to Washington D.C.

Ford is bringing its autonomous vehicles to Washington D.C., the fourth city to join the automaker’s testing program as it prepares to launch a self-driving taxi and delivery service in 2021.

Ford will begin testing its self-driving vehicles in the district in the first quarter of 2019. The company is already testing in Detroit, Pittsburgh and Miami.

Ford is a bit different from other companies that have launched autonomous vehicle pilots in the U.S. Ford is pursuing two parallel tracks—testing the business model and autonomous technology—that will eventually combine ahead of its commercial launch in 2021.

Argo AI, the Pittsburgh-based company that Ford invested $1 billion into in 2017, is developing the virtual driver system and high-definition maps designed for Ford’s self-driving vehicles. Meanwhile, Ford is testing out its go-to-market strategy through pilot programs with partners like Domino’s and Postmates, and even some local businesses.

The testing program in DC will follow that same thinking with an emphasis on job creation and equitable deployment. Ford says its autonomous vehicles will be in all eight of the district’s wards. Eventually, it will operate business pilot programs in all eight wards as well. Ford has already established an autonomous vehicle operations terminal in Ward 5, where it will house, manage and conduct routine maintenance on the fleet and continue developing its vehicle management process.

Argo AI already has vehicles on DC’s streets, mapping roads in the first step toward testing in autonomous mode, the company said.

“Both Ford and district officials are committed to exploring how self-driving vehicles can be deployed in an equitable way across the various neighborhoods that make up Washington, D.C., and in a way that promotes job creation,” Sherif Marakby, CEO of Ford Autonomous Vehicles LLC, wrote in a Medium post Monday.

Marakby underscored a recent report by Securing America’s Future Energy that found autonomous technology could improve people’s access to jobs as well as retail markets.

Ford announced in July 2018 plans to spend $4 billion through 2023 in a newly created LLC dedicated to building out an autonomous vehicles business. The new entity, Ford Autonomous Vehicles LLC, houses the company’s self-driving systems integration, autonomous-vehicle research and advanced engineering, AV transportation-as-a-service network development, user experience, business strategy and business development teams. The spending plan includes a $1 billion investment in startup Argo AI.
