Analyst Watch: Cloud native means Kubernetes

Attending the Pivotal SpringOne conference last month brought home how important the alignment around Kubernetes has become in the cloud-native technology world. The event is a developer conference for Spring, the popular Java web framework; Pivotal was keen to quote the recent JetBrains survey in which the two most popular offerings in this category were Spring Boot (56%) and Spring MVC (43%), with the next most popular standing at just 6%.

This report reflects my experiences at the event wearing Kubernetes-tinted glasses, because to play in the cloud-native world today you need to be part of the open-source Kubernetes ecosystem. The pace of innovation is so rapid that it makes no sense to build your own equivalent, whether closed or open source, and try to keep up with the efforts of a much larger community.

Kubernetes has emerged as a de facto standard in cloud-native computing, and it achieved that because it is open source, vendor neutral, and its timing was perfect in solving the need to manage containers. Originated by Google, the open-source project is today owned by the Cloud Native Computing Foundation (CNCF), a non-profit, vendor-neutral organization that is in turn part of the Linux Foundation. The Kubernetes ecosystem allows many players, from startups to internet giants, to participate in cloud-native computing, fueling its growth and evolution.

VMware CEO Pat Gelsinger was invited onto the opening keynote stage and talked about the company’s acquisition of Pivotal. VMware gave birth to Pivotal as an external venture with EMC (now part of Dell), and bringing Pivotal inside VMware is a strategic move that is all about Kubernetes. VMware recently announced a Kubernetes-native vSphere (Project Pacific) and Project Tanzu, a build, run, and manage offering for cloud-native applications, again with Kubernetes at the center. The pieces of VMware’s strategy are falling into place, which explains why embracing Pivotal technology even more tightly within the VMware stack makes a lot of sense.

All the public cloud players want to facilitate Kubernetes-based cloud-native applications, and with VMware positioning itself as the middleware for the cloud, it can benefit in multiple ways: Pivotal gives it grassroots developers, and its VM infrastructure stack attracts its enterprise customer base to the public cloud players that want to run those enterprise workloads.

The cloud is also a strategic play for Google, and it has played a benign role, supporting the open-source community. So it came as a surprise that two important open-source projects in the Kubernetes ecosystem, Knative and Istio, which had been expected to join the CNCF, will instead remain managed by Google. While there is nothing wrong with this move, given how much investment is flowing into the Kubernetes world there will be suspicion that Google will steer these projects toward serving Google Cloud better than rival clouds. It remains to be seen whether Google can convince the community of the wisdom of its decision to retain control of these projects, or whether the community decides it is better to create a fork.

Moving to the cloud, and re-architecting applications to run optimally on the cloud, is at the heart of digital transformation. Pivotal provides cloud technology, but for many of its target large enterprises the re-architecting element is a huge undertaking. Helping to enable this is Pivotal Labs, the consulting body that helps enterprises master agile and DevOps and enter the brave new world of containers, microservices and more. Large enterprises carry a huge amount of legacy code (‘heritage’ is a nice term I heard used), and to help transform it, Pivotal Labs has created agile techniques for large-system modernization. This is a combination of methodologies and tools, such as the Boris method, a process for mapping architecture components (named after The Who song, “Boris the Spider”), and the SNAP (Snap Not Analysis Paralysis) method, which uses a Pivotal tool called App Analyzer that automatically analyzes code and provides complexity analytics, such as the degree of monolithic coupling across components. The Labs concept is a huge success, and key competitor IBM Red Hat has emulated it, but it is difficult to scale in large multinationals. The path taken by two customers, Dell and a large telco, is to create an internal version of Labs and have Pivotal train the trainers.

Pivotal arranged a customer panel to talk to analysts. My two takeaways were: 1) tools do matter: while culture and people clearly matter, good tools (with first-class automation) make a huge difference in easing digital transformation, and Pivotal Platform (formerly Pivotal Cloud Foundry) plays this role; and 2) finding recruits with cloud-native skills is a persistent challenge. There is a large gap in our educational systems: colleges are not turning out vocationally trained software engineers, so graduate recruits into high tech have no idea what cloud-native computing is. Therefore, it’s not just a question of reskilling IT staff whose comfort zone is waterfall; it’s also necessary to train new recruits.

Finally, a major announcement at the event was the partnership with Microsoft on Azure Spring Cloud. In development since February 2019 and going live in early 2020, the arrangement gives developers a seamless and effortless move to the Azure public cloud via Spring; the platform takes care of all infrastructure concerns, providing ‘Spring Cloud as a Service,’ and is managed as a service by both companies. Platforms like Spring carry significant enterprise workloads, and as observed above, public cloud providers want large enterprises as customers, so expect to see interest from other public clouds.
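For readers who haven’t worked with Spring Boot, the kind of workload Azure Spring Cloud is built to host is typically a small, self-contained service along these lines. This is only a minimal sketch; the package, class and endpoint names are illustrative rather than taken from the announcement, and it assumes the standard spring-boot-starter-web dependency.

```java
// Minimal Spring Boot service of the kind Azure Spring Cloud is meant to host.
// All names here are illustrative; requires spring-boot-starter-web on the classpath.
package com.example.greetings;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class GreetingsApplication {

    // One HTTP endpoint; the hosting platform is expected to handle scaling,
    // networking and the other infrastructure concerns mentioned above.
    @GetMapping("/hello")
    public String hello() {
        return "Hello from Spring Boot";
    }

    public static void main(String[] args) {
        SpringApplication.run(GreetingsApplication.class, args);
    }
}
```

The ‘Spring Cloud as a Service’ pitch is that a service like this can be handed to the managed platform as-is, with the developer never provisioning the underlying virtual machines or Kubernetes clusters directly.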

The post Analyst Watch: Cloud native means Kubernetes appeared first on SD Times.


SD Times news digest: Visual Studio adds Live Share app casting and contacts, Rockset Partners with Tableau, Android 10 introduces privacy protections for physical activity

Microsoft has introduced contacts in Live Share in Visual Studio 16.4, enabling developers to collaborate with recent and contextual collaborators. Built-in audio calling has also been enhanced.

“With app casting your debugging sessions can be a powerful place to do real-time collaboration and make progress on hard bugs. With direct invitations and status sharing with contacts you now have a new ease to your collaboration process,” Microsoft wrote in its announcement.

The full details on the new feature are available here.

Rockset Partners with Tableau for real-time dashboards on NoSQL Data
Serverless search and analytics company Rockset has announced a partnership with Tableau that gives users the ability to build interactive, live Tableau dashboards on NoSQL data without requiring any coding.

Rockset captures NoSQL data from sources such as Apache Kafka and Amazon DynamoDB, which then helps Tableau users monitor business data. With Rockset, streaming event data from Kafka is automatically represented as a dynamic SQL table and available for querying in seconds.
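For context, the sketch below shows the kind of hand-written consumer code that reading those Kafka events normally involves. It uses the standard Kafka Java client with invented topic and server names, and is roughly the plumbing the Rockset integration claims to replace with an automatically maintained SQL table.

```java
// Plain Kafka consumer boilerplate (topic, group and server names are invented).
// Rockset's pitch is that this layer is replaced by an automatically
// maintained SQL table over the same event stream.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OrderEventsReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-dashboard");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each record's JSON value would still need parsing and
                    // aggregation before it could feed a dashboard.
                    System.out.println(record.value());
                }
            }
        }
    }
}
```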

The full details on the integration are available here.

Android 10 introduces privacy protections for physical activity 
Android announced a new runtime permission for activity recognition for apps in Android 10.

Starting in December 2019, activity data will be restricted for apps that do not include the Google Play Services legacy activity recognition permission in their manifest. The system also auto-grants the new permission to an app when a user upgrades to Android 10, provided the app already holds the legacy permission.

The activity recognition runtime permission is required for certain Recording API and History API calls. The full details are available here.
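For developers, the practical change is that activity data now sits behind a standard dangerous permission, so it has to be declared in the manifest and requested at runtime like any other. The sketch below is a minimal illustration using the AndroidX compatibility helpers; the activity name and request code are invented.

```java
// Sketch of requesting the Android 10 activity recognition runtime permission.
// Assumes the manifest also declares android.permission.ACTIVITY_RECOGNITION;
// the activity name and request code are illustrative.
import android.Manifest;
import android.content.pm.PackageManager;
import android.os.Bundle;

import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

public class FitnessActivity extends AppCompatActivity {
    private static final int REQUEST_ACTIVITY_RECOGNITION = 42;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACTIVITY_RECOGNITION)
                != PackageManager.PERMISSION_GRANTED) {
            // Prompt the user; recording and history calls stay unavailable until granted.
            ActivityCompat.requestPermissions(
                    this,
                    new String[] { Manifest.permission.ACTIVITY_RECOGNITION },
                    REQUEST_ACTIVITY_RECOGNITION);
        }
    }
}
```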

 

The post SD Times news digest: Visual Studio adds Live Share app casting and contacts, Rockset Partners with Tableau, Android 10 introduces privacy protections for physical activity appeared first on SD Times.


Report: Cybersecurity workforce needs to grow 145% to fill skills gap

There is currently a skills gap in cybersecurity, and companies need to go a long way to fill that gap. According to a new study from (ISC)2, the cybersecurity workforce will need to grow 145% in order to close that gap.

The 2019 (ISC)2 Cybersecurity Workforce Study estimates that the cybersecurity workforce is currently made up of 2.8 million individuals, but that an additional 4.07 million cybersecurity professionals are needed to close the gap.

In the US, the gap is significantly smaller than it is worldwide. The US currently has 804,700 cybersecurity professionals but needs an additional 498,480, an increase of roughly 62%.

RELATED CONTENT: 
Companies are making up for lack of cybersecurity professionals by investing in their developers
Report: Security Operations Centers are understaffed

“We’ve been evolving our research approach for 15 years to get to this point today, where we can confidently estimate the current workforce and better understand what it will take as an industry to add enough professionals to protect our critical assets,” said Wesley Simpson, chief operating officer, (ISC)2. “Perhaps more importantly, the study provides actionable insights and strategies for building and growing strong cybersecurity teams. Knowing where we stand and the delta that needs to be filled is a powerful step along the pathway to overcoming our industry’s staffing challenges.”

The report outlines four strategies that companies can use to build the workforce:

  1. Highlight training and professional development opportunities
  2. Properly level-set applicant qualifications
  3. Attract new workers, such as recent college graduates, consultants, or contractors
  4. Strengthen from within by developing and cross-training current IT professionals

(ISC)2 surveyed 3,237 individuals responsible for security across North America, Europe, Latin America, and Asia-Pacific for the report.

The post Report: Cybersecurity workforce needs to grow 145% to fill skills gap appeared first on SD Times.


Is the Hadoop party over?

Fifteen years ago, the Hadoop data management platform was created. This kicked off a land rush of companies looking to plant their flags in the market, and open-source projects began to spring up to extend what the platform was designed to do.

As often happens with technology, it ages, and newer things emerge that either eclipse or consume those earlier works. And both of those things have impacted Hadoop: Cloud providers offered huge data storage that overtook HDFS and the proprietary MapR file system. But industry experts point to execution missteps by the Hadoop platform providers as being equally to blame for what appears to be the decline of these platforms. 

RELATED CONTENT:
Data scientists need to be good storytellers
The problem with data

Things looked bad for the big three in the market. Cloudera and Hortonworks merged to strengthen their offering and streamline operations, but the combined company fumbled its release and sales plan. MapR, which offered a leading file system for Hadoop projects, clung to life before finally being rescued (if that’s the right word) by HPE, which has not had a great track record of reviving struggling software.

To get some perspective, it’s important to define exactly what Hadoop is. And that’s no simple task. It started out as a single open-source distributed data storage project to support the Big Data search tool Nutch, but it has since grown into the stack that it is today, encompassing data streaming and processing, resource management, analytics and more.

Gartner analyst Merv Adrian said back when he started covering the space, the question was ‘What is Hadoop?’ Today, he said, it just might be what ISN’T Hadoop? “I had a conversation with a client that just finished a project where they used TensorFlow, a Google cloud thing for AI, and they used Spark and they used S3 storage, as it happens, because they were on Amazon but they liked the TensorFlow tool,” Adrian recounted. “And they said, ‘This is one of the best Hadoop projects we’ve done so far,’ and I asked them, ‘Why is this a Hadoop project?’ And they said, ‘Well, the Hadoop team built it, and we got the Spark from our [Cloudera] Hortonworks distribution.’ It’s some of the stuff we got with Hadoop plus some other stuff.”

Factors impacting Hadoop
How did we get to this place, where something that seemed so transformational just a few years ago couldn’t sustain itself? First and foremost, the Hadoop platform vendors simply missed the cloud. They were successfully helping companies with on-premises data centers implement distributed file systems and the rest of the stack, while Google, Amazon, Microsoft and, to a lesser degree, Oracle were building this out in the cloud. Further, open-source projects that extended or augmented the Hadoop platforms became viable options in their own right. This created complexity and some confusion.

According to Monte Zweben, co-founder and CEO of data platform provider Splice Machine, the problems were due to the growing number of components supporting Hadoop platforms, and from swelling lakes of uncurated data. “When Hadoop emerged, a mentality arose that was, to use a fancy word, specious. That mentality was that you could just dump data onto a distributed system in a fairly uncurated and sort of random way, and the users of that data will come. That has proven to not work. In the technical ranks, they call that ‘schema on read,’ meaning, ‘Hey, don’t worry about what these data elements look like, whether they’re numbers or strings. Just dump data out there in any random format and then whoever needs to build applications will make sense of it.’ And that turned out to be a disaster. And what happened with this data lake view is that people ended up with a data swamp.”
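A small illustration of the burden ‘schema on read’ shifts onto every consumer: with nothing agreed up front, each application has to discover field names and types at read time and hope other consumers made the same guesses. The sketch below uses the Jackson JSON library, and the event fields are invented for illustration.

```java
// Illustration of the "schema on read" burden Zweben describes: with no
// curated schema, each consumer must probe field names and types at read time.
// The field names and values below are invented.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SchemaOnReadExample {
    public static void main(String[] args) throws Exception {
        String rawEvent = "{\"cust\":\"1042\",\"amount\":\"19.99\",\"ts\":1573500000}";

        JsonNode node = new ObjectMapper().readTree(rawEvent);

        // Is the customer id a number or a string? Is amount a double or a
        // decimal string? Every consuming application answers this on its own,
        // and when the answers drift, the data lake starts to look like a swamp.
        String customer = node.path("cust").asText();
        double amount = Double.parseDouble(node.path("amount").asText());
        long timestamp = node.path("ts").asLong();

        System.out.printf("customer=%s amount=%.2f ts=%d%n", customer, amount, timestamp);
    }
}
```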

Zweben went on to say that complex componentry created a sales problem, due to how complicated it made the Hadoop distributions. “You need a car but what you’re being sold is a suspension system, a fuel injector, some axles, and so on and so forth. It’s just way too difficult. You don’t build your own cars, so why should you build your own distributed platform, and that’s what I think is at the heart of what’s gone sideways for the Hadoop community. Instead of making it easier for the community to implement applications, they just kept innovating with lots of new componentry.”

The emergence of the public cloud, of course, has been cited as a major factor impacting Hadoop vendor platforms. But Scott Gnau, vice president of data platforms at InterSystems and former CTO at Hortonworks, sees it from two sides.

“If you define Hadoop as HDFS, then the game is over … take your toys and go home,” Gnau said. “I don’t think that cloud has single-handedly caused the demise of or trouble for Hadoop vendors … The whole idea of having an open-source file system and a massively parallel compute paradigm — which was the original Hadoop stuff — has waned, but that doesn’t mean that there isn’t a lot of opportunity in the data management space, especially for open-source tools.”

Those open-source projects also have hurt the Hadoop platform vendors, providing less expensive and just as capable substitutes. “There are about a dozen or so things that all distributors have,” Gartner’s Adrian explained. “Bear in mind that in every layer of this stack, there’s an alternative. You might be using HBase but you might be using Accumulo. You might be using Storm, but you might be using Spark back then. Already, by 2017, you could also add, you might be using HDFS or you might be using S3, or rather data lake storage, and that’s very prevalent now.”
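Adrian’s point about swappable layers shows up even at the code level. Hadoop’s own FileSystem API does not care whether the bytes live in HDFS or in S3 (via the s3a connector), which is part of why cloud object storage could slide in underneath existing jobs. The sketch below is minimal and the URIs and paths are illustrative.

```java
// Hadoop's FileSystem abstraction treats HDFS and S3 (via the s3a connector)
// interchangeably, which is one reason object storage could displace HDFS.
// URIs are illustrative; reading s3a:// also requires the hadoop-aws module.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StorageNeutralRead {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Swap this URI for "hdfs://namenode:8020/data/events/part-0000"
        // and the rest of the code is unchanged.
        URI location = URI.create("s3a://example-bucket/data/events/part-0000");

        try (FileSystem fs = FileSystem.get(location, conf);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(fs.open(new Path(location))))) {
            System.out.println(reader.readLine());
        }
    }
}
```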

Vendors still delivering value
Still, there is much life left in the space. Adrian provided a glimpse of the value remaining there. “Let’s just take the dollars associated with what you could call the Hadoop players, even if they don’t call themselves that. In 2018, if you took the dollars for Cloudera and MapR and Google and AWS Elastic MapReduce, we’re talking about close to $2 billion in revenue representing over 4.2% of the DBMS revenue as Gartner counts it. That makes it bigger than the sum by far of all of the pure-play non-relational vendors who weren’t Hadoop. If you add up MarkLogic, MongoDB, Datastax and Kafka, those guys only add $600 million of revenue — that’s less than a third of the Hadoop space. In 2018.”

Going forward, a big future opportunity lies in helping organizations manage their data in hybrid and multicloud environments. Arun Murthy, chief product officer at Cloudera, explained, “Hadoop started off as one open-source project, and it’s now become a movement — a distributed architecture running on commodity hardware, and cloud well fits this concept of commodity hardware. We want to make sure that we actually help customers manage that commodity hardware using open-source technologies. This is why Hadoop becomes an abstraction layer, if you will, and enterprises can use it to move data and workloads better if they choose, with consistent security and governance, and you can run multiple workloads on the same data set. That data can reside on-prem, in Amazon S3, or Microsoft [Azure Data Lake Storage], and you get a consistent one plane of glass, one set of experiences to run all the workloads.”

To that end, Cloudera last month launched the Cloudera Data Platform, a native cloud service designed to manage data and workloads on any cloud, as well as on-premises. 

Murthy pointed out that enterprises are embracing the public cloud, and in many cases, more than one. They also are likely to have data they’re retaining on private servers. “IT is trying really hard to make sure they don’t run afoul of regulations, while the line of business is moving really fast, and want to use data for their productions,” he said. “This leads to inherent tension. Both sides are right. In that world, you want to make sure regardless of where you want to do this — on-prem, public cloud and the edge — today, more data is handled outside the data center than inside the data center. When you look at the use cases the line of business wants to solve — even something as prosaic as real-time billing — you want to lift your smartphone and see how much data you used. You need streaming, data transformation, reporting and machine learning.”

Another opportunity for ISVs lies in playing the multicloud game, according to Gartner’s Adrian, who said containers are not going to deliver it. “Containers will let me pick something up and move it somewhere else and have it run, but it’s not going to let me govern it, it’s not going to let me manage security and policy consistently, from one place. That is one of the opportunities,” he said.

“What Cloudera has ahead of them is a very good, relatively open field to continue to sell what we think of as Hadoop on-premises,” Adrian added, “people who already know what they’re doing, and there are lots of successful use cases that are going to grow. They’re going to sell more nodes for the people who want to be on-prem, and as for people who want to do on-prem, where else are they going to go to? They could cobble it together out of open-source pieces, which, if they haven’t done it by now, they’re not the early adopters with a strong engineering organization that’s going to do that. They’re going to want something packaged.”

As the industry moves forward, the technologies that underlie Hadoop remain, even if they won’t be known as Hadoop.

“Far be it for me to guess what the marketing folks at these companies are going to come up with,” InterSystems’ Gnau said. “With all of the execution missteps by management teams and these companies recently, maybe they want to change their name, to protect the innocent,” he added with a chuckle. “In the end, there is a demand out there for this kind of tack, and folks who are calling it over because of the execution missteps are being a bit short-sighted.

“I’m talking about the need in the marketplace,” he continued. “I’ve got diverse sets of data created by systems or processes that are potentially outside of my control, but I want to capture and map that data into real-time decision-making. What are the tools I need to go do that? Well, provenance is one of the tools I need. Certainly, the ability to have flexibility and not require a schema for capturing, onboarding this data, because data that’s created outside of my control is going to change, the schema’s going to change, so there’s an interesting space for the toolset, regardless of what it ends up being called.”

So whatever its name will be, Hadoop technologies will continue to have a place in the market, no matter who’s supplying them. “I think there is a use case and a relevance for that kind of product and that kind of company,” Gnau said, “and I do think there’s a lot of confusion based on failure to execute versus validity of technology.”

The post Is the Hadoop party over? appeared first on SD Times.


SD Times Open-Source Project of the Week: Titan

Data is becoming more important than ever, and developers are beginning to realize they need better ways to harness and work with data. The problem, however, is that data isn’t handled the same way development is and therefore it can become a time-consuming and complex process. 

“The rise of git, docker, and DevOps has created a new world where developers can easily build, test, and deploy right from their laptop. Despite these advances, developers still struggle to manage structured data with the same speed and simplicity. Techniques like SQL scripts, database dumps, and plain text exports still leave a lot of work for developers,” the Delphix Titan team wrote on the project’s website.

To address this, Delphix earlier this year open sourced Titan, a project that enables developers to treat data like code.

“The thinking behind Titan is today the way developers develop is locally on their laptop. They pull data from their git repository, they clone that data locally on their laptop, and they go to work. What do they do for data? They are actually copying databases around and they can’t copy a commercial database around. Even if they get that data, they can’t version it. If they do testing that changes the data, then they have to get another copy and it is all a manual process. There is no git for data, and there have been several attempts to make it so we decided we would make our own,” Sanjeev Sharma, vice president and global practice director for data modernization and strategy at Delphix, told SD Times.  

Titan is not git for data, but it provides capabilities that help developers manage, version and branch databases locally on their laptops, Sharma explained. The project enables developers to clone, commit, checkout, push and pull data like code. In addition, they can rollback to a previous state, build a test data library and share structured datasets, according to the project’s website. Other features include data versioning, support for off-the-shelf Docker containers, and a command line tool.

“Setting up and tearing down databases for developers has been the bane of the dev workflow. Not only do developers have to decide WHERE and HOW to run the database but they have to struggle with the configuration,” Robert Reeves, CTO of Datical, said in a post. “Of course, containers are perfect for local development, but until Titan, applying the dev workflow to the data just didn’t happen.”

The post SD Times Open-Source Project of the Week: Titan appeared first on SD Times.


Python becomes the second-most popular programming language on GitHub

Python is finally beginning to outrank Java. At the beginning of the year, the TIOBE Index predicted Python would soon replace Java, and that prediction is becoming a reality with the release of GitHub’s 2019 State of the Octoverse.

According to the report, Python outranked Java for the first time, becoming the second-most popular programming language on GitHub by repository contributors. JavaScript is the number one programming language, and Java is now the third most popular.

The State of the Octoverse is designed to examine how software development is growing, evolving, and connecting.

“This year, we’ve seen that software development is, more than ever, a community endeavor. The Octoverse is growing more interconnected as it becomes easier to find and build on work from others around the globe. And some of the top open source projects not only have thousands of contributors—they’re dependencies for millions of repositories,” GitHub wrote in a post that summarized the report. 

According to the report, of the more than 40 million developers on GitHub, 80% are from outside the United States, and 10 million new developers joined in the last year. GitHub explained that it determined the country of origin of contributions based on information from organization profiles and the countries in which organization members are most commonly active. China, India and Germany were the countries most involved in open source.

The open-source community also gained 10 million new contributors in 2019, saw 44% more developers create their first repository, and had 1.3 million people make their first open-source contribution.

Looking more closely at programming languages, GitHub also found Dart and Rust were the fastest growing languages by repository contributors this year.

New top projects based on number of contributors included ‘flutter/flutter,’ ‘firstcontributions/first-contributions,’ and ‘home-assistant/home-assistant,’ while ‘microsoft/vscode’ and ‘ansible/ansible’ have been on the top list since 2016.

“People are doing all sorts of amazing things on GitHub, from reimagining robots to detecting diseases. Whether developers play-tested a game or trained an algorithm, we’ve found they’re more productive than ever this year,” GitHub wrote. 

Other findings of the Octoverse included: more than 7.6 million vulnerability alerts were remediated, Jupyter Notebooks usage has seen more than 100% growth year-over-year, and TensorFlow’s contributors grew from 2,238 to 25,166 people globally.

The post Python becomes the second-most popular programming language on GitHub appeared first on SD Times.


The demise of Flash

There once was a time when Adobe Flash was the obvious choice for building rich web applications, but as the Internet began to grow and technology started to advance, Flash slowly started to die out.

Over the last couple of years, more and more businesses have announced that they will no longer support Flash. There was even an Occupy Flash movement to “rid the world of the Flash Player plugin.”

“Flash Player is dead. Its time has passed. It’s buggy. It crashes a lot. It requires constant security updates. It doesn’t work on most mobile devices. It’s a fossil, left over from the era of closed standards and unilateral corporate control of web technology. Websites that rely on Flash present a completely inconsistent (and often unusable) experience for fast-growing percentage of the users who don’t use a desktop browser. It introduces some scary security and privacy issues by way of Flash cookies,” the Occupy Flash website stated. “Flash makes the web less accessible. At this point, it’s holding back the web.”

In addition, Flash has become expensive to maintain and support due to crashes and bugs, and its ability to provide rich content has been diminishing over the years as more modern approaches have come to the surface. 

The beginning of the end of Flash can be traced back to 2007 with the release of the iPhone, which came equipped with a mobile Internet browser but did not support Adobe Flash. Fast forward to 2010, when Steve Jobs released an open letter sharing his thoughts about Flash.

“I wanted to jot down some of our thoughts on Adobe’s Flash products so that customers and critics may better understand why we do not allow Flash on iPhones, iPods and iPads. Adobe has characterized our decision as being primarily business driven – they say we want to protect our App Store – but in reality it is based on technology issues. Adobe claims that we are a closed system, and that Flash is open, but in fact the opposite is true,” Jobs wrote. 

Apple began working on its own open standards, such as the open-source project WebKit, which provided an HTML5 rendering engine. “New open standards created in the mobile era, such as HTML5, will win on mobile devices (and PCs too). Perhaps Adobe should focus more on creating great HTML5 tools for the future, and less on criticizing Apple for leaving the past behind,” Jobs wrote.

Over the years, HTML5 has matured and become more advanced and other open standards such as WebGL and WebAssembly have emerged to enable companies to bypass the Flash plugin and use modern technology to add rich features to their web solutions. These standards are more favorable than Flash because they are available natively inside a browser, instead of being a browser hack. 

Adobe itself has moved on from the Flash brand. It announced the release of Adobe Animate CC in 2015, which supports HTML5, WebGL and Scalable Vector Graphics. “Over time, we’ve seen helper apps evolve to become plugins, and more recently, have seen many of these plugin capabilities get incorporated into open web standards. Today, most browser vendors are integrating capabilities once provided by plugins directly into browsers and deprecating plugins,” the company wrote in a post.  “Given this progress, and in collaboration with several of our technology partners – including Apple, Facebook, Google, Microsoft and Mozilla – Adobe is planning to end-of-life Flash.”

Adobe is expected to stop distribution of the Flash Player by the end of 2020. 

The post The demise of Flash appeared first on SD Times.


Saving Flash from extinction

Flash is quickly approaching the end of its life. Adobe plans to halt updates and distribution by the end of 2020, and encourages any content creators to migrate their existing Flash content to new open formats like HTML5, WebGL, and WebAssembly. 

“Several industries and businesses have been built around Flash technology – including gaming, education and video – and we remain committed to supporting Flash through 2020, as customers and partners put their migration plans into place,” Adobe wrote in a blog post when it announced its plans in 2017. 

RELATED CONTENT: Web development: So many choices to get the right fit

The problem, however, is not everyone is ready to say goodbye to Flash. 

“Even though the openness and accessibility of web standards is the best way to go, the ease of use and the joy of creativeness with Flash is something I still miss now and then,” Juha Lindstedt, a web architect who created a petition to open-source Flash, told SD Times. “There was a movement around cool Flash sites, developers almost competing who would make the coolest art piece or website.”

Lindstedt does acknowledge that the Internet was very different back then. He explained that when Flash was at its most popular, the Internet was still new and developers were using Flash to create what were almost pieces of artwork to showcase their talent. Today’s Internet is about being more social and connected. “I would compare Flash to music videos,” he said. “Back in the music videos’ golden era, they used to be art pieces, sometimes even separate short films. MTV was showcasing the best music videos and FWA [Favorite Website Awards] was similar for Flash projects,” he said.

Open web technologies are becoming the default choice when creating web experiences because they are fast and more power-efficient. “They’re also more secure, so you can be safer while shopping, banking, or reading sensitive documents. They also work on both mobile and desktop, so you can visit your favorite site anywhere,” Anthony Laforge, product manager of Google Chrome, wrote in a blog post when Google announced its own plans to remove Flash from Chrome. As of July 2019, Flash has been disabled in Chrome by default. 

Despite the general agreement that Flash can’t keep up with today’s user demands and experiences, most would agree that throughout its life Flash has been the foundation for many web skills. “For 20 years, Flash has helped shape the way that you play games, watch videos and run applications on the web,” wrote Laforge. 

Because of this, there are many efforts looking to keep Flash around in some capacity. “Flash is an important piece of the Internet’s history. We need to somehow preserve those interactive art pieces,” said Lindstedt.

Preserving Flash content
One effort to preserve, or reimplement, Flash was the Gnash project, an open-source Flash alternative that lived under the Free Software Foundation banner. “Unfortunately, Gnash soon fell behind Adobe’s player in terms of features,” Alessandro Pignotti, founder and CTO of Leaning Technologies, a compile-to-JavaScript and compile-to-WebAssembly tools provider, and author of the open-source Flash player Lightspark, wrote in a post.

Other efforts, in addition to Lindstedt’s petition to open-source Flash, include Shumway, an HTML5 technology experiment; Ruffle, a Flash Player emulator written in Rust; and Lightspark, a C++ implementation of the Flash player.

“If there is one lesson that I learned from working on Lightspark, it is that reimplementing Flash is a very, very, very hard and time-consuming task,” wrote Pignotti. “That’s why I am certain that the only practical, robust way to accurately preserve Flash content in the medium and long term is not through a reimplementation.”

Along with his team at Leaning Technologies, Pignotti recently announced CheerpX, a new technology designed to run unmodified x86 binaries in a browser using WebAssembly. 

“There is so much of Flash content [out there]. We are talking thousands, tens of thousands of content and that is if you are only looking at video games. But that is not the only type of content. Over the years there has been a lot of enterprise content, dashboards, logistics, traditional databases and front-end solutions written in Flash or in various frameworks built upon Flash like Flex,” Stefano De Rossi, founder and CEO of Leaning Technologies, told SD Times. 

With no intervention required, CheerpX allows users to enjoy Flash applications, play Flash video games in a browser, and run legacy, enterprise-grade Flash content whose owners don’t have the means or resources to rewrite it from scratch in a modern technology.

“It is about preservation of all the content, and also about migration and modernization of existing legacy apps,” said De Rossi. 

CheerpX is still in a prototype phase, but is currently capable of fully executing unmodified apps, the team explained. Going forward, the plan is to directly download the Flash binary over HTTP, write a native x86 host application, and provide access to browser resources. 

“As you have probably guessed by this point, our solution to preserve Flash in the long term is to run the full, unmodified, Flash plug-in from Adobe in WebAssembly,” wrote Pignotti.

“Long live Flash!” Lindstedt added. 

The post Saving Flash from extinction appeared first on SD Times.


SD Times news digest: Google open sources Cardboard, the Microsoft Cloud Adoption Framework, and Code42’s threat detection capabilities

Google open sourced its Cardboard project that lets developers create VR experiences across Android and iOS devices. 

“We think that an open source model—with additional contributions from us—is the best way for developers to continue to build experiences for Cardboard,” Google wrote in a blog post. “We’ve already seen success with this approach with our Cardboard Manufacturer Kit—an open source project to enable third-party manufacturers to design and build their own unique compatible VR viewers—and we’re excited to see where the developer community takes Cardboard in the future.”

The open source project includes APIs for head tracking, lens distortion rendering, input handling and an Android QR code library, so that apps can pair any Cardboard viewer without depending on the Cardboard app.

Microsoft announced the Microsoft Cloud Adoption Framework
Microsoft announced a cloud adoption framework for Azure that includes Innovate and Manage stages as well as new resources and assessments to aid organizations through their cloud migration. 

The Cloud Adoption Framework allows organizations to break down their journey into stages for business decision-makers, cloud architects and IT professionals. 

The framework breaks the process down into the following stages: strategy, plan, ready, adopt, govern and manage. 

The full details on the framework are available here.

Code42 announces threat detection capabilities
Code42 announced threat detection capabilities with new integrations into cloud-based email platforms. 

“Organizations can now more easily and rapidly detect and investigate suspicious data movement through one of the most widely-used data exfiltration vectors, cloud-based email. These capabilities can turn hours of investigative work into a task of a few minutes and yield a level of granularity into file contents and detail that security teams have been unable to achieve until now,” said Rob Juncker, Code42 SVP of product, research, operations and development.

Looker 7 announced for infusing data experiences into workflows
Looker announced its new Looker 7 platform that infuses customizable data into day-to-day tools and workflows to close the gap between discovering insights and taking action. 

The new platform contains a development framework, an in-product Marketplace to quickly find and deploy add-ons and a suite of development tools that includes new SDKs and a Developer Portal. 

Also, the business intelligence experience includes new dashboards, closed-loop Slack integration, new third-party integrations, a new analytical alert and SQL runner visualizations. 

The full details on the new platform are available here.

The post SD Times news digest: Google open sources Cardboard, the Microsoft Cloud Adoption Framework, and Code42’s threat detection capabilities appeared first on SD Times.


Dart 2.6 released with dart2native

Google has announced the latest release of its programming language Dart. Version 2.6 comes with dart2native, an extension of its existing compiler with the ability to compile Dart programs to self-contained executables containing ahead-of-time compiled machine code.

According to the company, this will enable developers to create command-line tools for macOS, Windows, or Linux using Dart; this kind of native, ahead-of-time compilation was previously only available when targeting iOS and Android mobile devices.

The self-contained executables can run on machines that don’t have the Dart SDK installed and can start running in just a few milliseconds. Meanwhile, the same set of core libraries are available in Dart when compiling to native code, Google explained.

The updated version also includes interoperability with C code via ‘dart:ffi,’ and the Dart team pointed to kilo, a 7MB code editor written in less than 500 lines of Dart code, as an example of what the new compiler can produce.

“By compiling your service’s code ahead-of-time to native code, you can avoid this latency and begin running right away. In addition, with native code you can create Dart services that have a small disk footprint and are self-contained, greatly reducing the size of the container in which the Dart service runs,” the Dart team wrote in a blog post

Dart developer Paul Mundt reported that he was able to reduce the size of his Docker image by more than 90% by using native code.

This initial version of the dart2native compiler has a few known limitations including no cross-compilation support, meaning that the compiler creates machine code only for the operating system that it’s running on; no signing support; and no support for ‘dart:mirrors’ and ‘dart:developer.’

The Dart team also said that the updated version contains a preview of an extension methods language feature, which the team said will be polished in the next SDK version. 

The post Dart 2.6 released with dart2native appeared first on SD Times.
