New report shows shakeup amongst top programming languages

JavaScript and Java continue to dominate the software development world as the top programming languages, but newly released reports indicate the times are changing. JetBrains released its State of Developer Ecosystem 2019 report, which found that while Java is still the most popular primary language and JavaScript is the most used overall, Python is gaining speed.

RELATED CONTENT: The rise of Kotlin

The report surveyed about 7,000 developers worldwide, and revealed Python is the most studied programming language, the most loved language, and the third top primary programming language developers are using.

The findings align with other recent language reports and analyses: TIOBE found Python had surpassed C++ and entered the top three programming languages in its TIOBE Index, and Stack Overflow’s developer survey found Python was the fastest-growing major language and the second most-loved language. Additionally, Stack Overflow reported Python is the most wanted language among developers.

“For the seventh year in a row, JavaScript is the most commonly used programming language, but Python has risen in the ranks again. This year, Python just edged out Java in overall ranking, much like it surpassed C# last year and PHP the year before,” Stack Overflow wrote in its analysis.

TIOBE also predicts Python will surpass Java as the top language in the next couple of years.

The top use cases for Python include data analysis, web development, machine learning and writing automation scripts, according to the JetBrains report. More developers are also moving to Python 3, with 9 out of 10 developers using the current version.

The JetBrains report also found that while Go is still a young language, it is the most promising programming language. “Go started out with a share of 8% in 2017 and now it has reached 18%. In addition, the biggest number of developers (13%) chose Go as a language they would like to adopt or migrate to,” the report stated. In Stack Overflow’s survey, by comparison, Go was the third most-wanted programming language, behind Python and JavaScript.

Other key findings from JetBrains’ State of Developer Ecosystem included:

  • Seventy-three percent of JavaScript developers use TypeScript, which is up from 17 percent last year
  • Seventy-one percent of Kotlin developers use Kotlin for work
  • Java 8 is still the most popular version of Java, but developers are beginning to migrate to Java 10 and 11


SD Times news digest: TensorFlow 2.0 beta, WSL2 for Windows insiders, and TIBCO’s new cloud-native offerings

TensorFlow, the open-source machine learning library for research and production, announced the beta version of its upcoming 2.0 release.

According to the team, it has completed renaming and deprecating symbols for the 2.0 API for the beta release. It also includes 2.0 support for Keras features like model subclassing, a simplified API for custom training loops, and distribution strategy support for most kinds of hardware.

Core components such as TensorBoard, TensorFlow Hub, TensorFlow Lite, and TensorFlow.js work with the beta, while support for TensorFlow Extended (TFX) components and end-to-end pipelines is still in progress.
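For JavaScript developers, TensorFlow.js, one of the core components noted above as working with the beta, exposes a similar Keras-style layers API. Below is a minimal sketch using made-up toy data; it relies on the standard @tensorflow/tfjs package rather than anything specific to the 2.0 beta.

```typescript
import * as tf from '@tensorflow/tfjs';

// A single dense layer learning y = 2x from a handful of made-up points
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
model.compile({ optimizer: 'sgd', loss: 'meanSquaredError' });

const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
const ys = tf.tensor2d([2, 4, 6, 8], [4, 1]);

model.fit(xs, ys, { epochs: 50 }).then(() => {
  // Should print a value close to 10
  (model.predict(tf.tensor2d([5], [1, 1])) as tf.Tensor).print();
});
```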

According to a post by the company, TensorFlow will complete Keras model support on Cloud TPUs and TPU pods, work on performance, and close issues for the final release. More information on what to expect in 2.0 is available here.

UiPath to train more than 750,000 users on Robotic Process Automation
Robotic Process Automation (RPA) provider UiPath is committed to training 750,000 Americans to work with RPA over the next 5 years through UiPath Academy, its free online training program.

The Academy offers courses that educate professionals for roles like RPA developer, solution architect, infrastructure engineer, implementation manager and business analyst.

“Business automation will have far-reaching impacts on the way we work, and we believe we have an obligation to America and the rest of the world to support the future of work,” said Daniel Dines, CEO and co-founder of UiPath. “We have pledged 80 percent of all the trainings committed to Pledge to America’s Workers.”

The White House Pledge to America’s Workers includes more than 200 organizations like Salesforce, Amazon and Deloitte.

TIBCO announces new cloud-native offerings to address developer challenges
TIBCO announced new and enhanced capabilities for its TIBCO Cloud Integration, Mashery, and Events offerings to tackle challenges developers face when working with cloud-native applications, according to the company.

The latest enhancements include native support for GraphQL, increased application responsiveness with support for event-driven patterns, and a new open-source, web-based Project Flogo streams designer.

“Developers have access to an unprecedented array of innovative technologies and powerful cloud compute right now, which empowers them to create new possibilities for customers,” said Rajeev Kozhikkattuthodi, vice president, product management and strategy at TIBCO. “TIBCO helps create that wave by streamlining application development and increasing developer productivity to build incredible customer experiences.”

WSL2 now available to Windows Insiders
The newly released Windows Subsystem for Linux 2 now runs in a virtual machine and includes new WSL commands.

Users will now have to use that VM’s IP address to access Linux networking applications in Windows and vice versa for accessing Windows networking applications from Linux.

New WSL commands allow users to convert a distro to use the WSL 2 or WSL 1 architecture; change the default install version for new distributions; and terminate, list and show information about all running distributions. The new commands can be viewed in detail here.

Microsoft also recommends putting the files that are frequently used with Linux applications into the Linux root file system instead of on the C drive to experience the file performance benefits.


SD Times Open-Source Project of the Week: Stencil

With last week’s release of Stencil One, developers can use the compiler to generate standards-compliant Web Components while bringing concepts from popular frameworks into a build-time tool.

According to Ionic, the creator of Stencil, the compiler takes features such as async rendering, reactive data binding, TypeScript and JSX and generates web components with all of those features included.

“Compared to using Custom Elements directly, Stencil provides extra APIs that makes writing fast components simpler,” Adam Bradley, co-creator and lead developer of Ionic Framework and Stencil, wrote in a post. “APIs like JSX and async rendering make fast, powerful components easy to create, while still maintaining compatibility.”
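As a rough illustration of that component model, here is a minimal sketch of a Stencil component using the decorator-based API (@Component, @Prop, @State and JSX) that Stencil documents; the tag name and behavior are invented for the example.

```typescript
import { Component, Prop, State, h } from '@stencil/core';

@Component({
  tag: 'todo-item', // custom element name (illustrative)
  shadow: true,     // render into shadow DOM
})
export class TodoItem {
  // Reactive input and internal state: changes trigger an async re-render
  @Prop() label: string;
  @State() done = false;

  render() {
    return (
      <label>
        <input
          type="checkbox"
          checked={this.done}
          onChange={() => (this.done = !this.done)}
        />
        {this.done ? <s>{this.label}</s> : this.label}
      </label>
    );
  }
}
```

The compiled output is a standards-compliant custom element, which is what lets the same component run inside any framework or with no framework at all.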

This solves the problem in which Ionic components were only compatible with Angular. Because Stencil generates standards-compliant web components, Ionic can now be used with frameworks such as React, Vue and Angular.

“Web Components allow Ionic to use web-standard APIs already built into all modern web browsers, rather than framework-specific APIs that are version-restricted and may change over time,” Bradley wrote. “Essentially, this enables Ionic components to be created and connected to the DOM, no different than any other element.”

In addition, the bundle sizes for Stencil apps were incredibly small, according to the company. For example, the TodoMVC Gzipped bundle size amounts to 2.0KB, with Svelte second in line at 3.6KB and a much larger 39.1KB for React. This is because Stencil is able to statically analyze the source of every component, allowing it to apply heavy optimizations and include only what the components require, Bradley explained.


Man and machine learning: Data projects and the opportunities for developers

As businesses increasingly move their operations to the cloud, they’re recognizing the potential to harness the almost limitless compute power available and tap into artificial intelligence and machine learning technologies to deliver insights and value to the business that were previously beyond their reach.

Businesses have never been in a better position to create value from the vast amounts of data they hold. Developers with the skills and knowledge to unlock this value are therefore in a prime position. But how should businesses approach such projects? Here are four tips for development teams and data scientists who want to help firms bring this value to market.

1. Agree on the use case
It’s imperative to be clear on the objectives for any AI project upfront. Use cases for AI fall into three main areas. Firstly, there are projects designed to improve customer engagement and serve up personalized recommendations to customers. Secondly, there are business analysis projects that optimize processes and support decision-making. And thirdly, there are operational AI projects, which use AI to digitize entire processes to deliver increased efficiency, reduced costs, and other savings.

Being clear about the scope of the project and how success will be measured is paramount. Targets could include a metric to reduce processing failures, to reduce the timeframe for a specific process, or to increase revenues by a certain percentage.

I’d recommend starting small, perhaps with one team in one geography. Proving the use case works in a particular scenario can allow initial success to be quickly demonstrated. The scope and scale of the project can then be gradually expanded – with the business value measured at every stage. This approach also allows for ‘fast failure’ so that if something isn’t working, resources can be re-directed and the team can start again.

2. Get your Agile game on
If data projects are to succeed once use cases are established, the right teams must be assembled. In my experience, Agile Scrum teams are the most effective. Take a nine-person team as an example. The breakdown of core disciplines should be as follows:

Firstly, a business analyst (BA) must take charge of establishing the use case that will be achieved with the project, understanding the ideation around it and feeding this back to the rest of the team. Through this process, clear objectives can be set, particularly relating to key results for the client, but also what is achievable with sprints on the development side.

Next, and perhaps the most important, is the data scientist. In the scenario set out above, four would be the optimum number – and this is by no means an overrepresentation. As with any data project, 70% to 80% of the work to be done involves cleaning and arranging the data such that it can be used to bring about the use case agreed at the start. Furthermore, unlike regular software products that are built once and then deployed, data projects demand continuous deployment due to the dynamic nature of data.

Machine learning engineers make up two members of the Scrum team and are responsible for building the data pipeline. Lastly, two QA members, with specific knowledge of the use case agreed upon at the start, complete the team.

3. Use the right data
One of the major concerns of any data project is data sensitivity. AI and machine learning algorithms need significant amounts of data to produce good results; the more data, the better the results. But there are of course limitations on the types of data that can legitimately be used.

Regulations and privacy concerns are the biggest issues to contend with. Where a data set contains private information that can provide significant value for machine learning, it’s essential to approach this in the right way. This could include anonymizing sensitive data before running the analysis.
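As a simple illustration of anonymizing a sensitive field before analysis, the sketch below hashes an email column with a secret salt so records can still be grouped and counted without exposing the raw value. The field names, salt handling and data are invented for the example and are not a recommendation for any particular data set.

```typescript
import { createHash } from 'crypto';

interface RawRecord { email: string; purchaseTotal: number; }
interface SafeRecord { customerKey: string; purchaseTotal: number; }

// Salted one-way hash: the analysis can still group by customer,
// but the original email never reaches the analytics environment.
function anonymize(records: RawRecord[], salt: string): SafeRecord[] {
  return records.map((r) => ({
    customerKey: createHash('sha256')
      .update(salt + r.email.toLowerCase())
      .digest('hex'),
    purchaseTotal: r.purchaseTotal,
  }));
}

const safe = anonymize(
  [{ email: 'jane@example.com', purchaseTotal: 42.5 }],
  process.env.ANON_SALT ?? 'dev-only-salt'
);
console.log(safe);
```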

Given that data is ever changing, and that data projects follow a process of continuous delivery, the best way to validate a use case is to start small. Once the scope of a data project is validated it can then be rolled out more widely, constantly expanding but always scaled to achieve the key objectives set from the start.

Scaling up can change the context of the data, as will dealing with different customers. It might be possible to build a very accurate model for one customer, but the same model may perform poorly for another. So, the model must be changed and run accordingly, then maintained once deployed. This is one of the key differences of data projects; 60% of the work follows deployment, largely due to maintenance requirements.

This is an issue that often leads to timeframes expanding beyond initial targets. In a regular development project, you can predict with some degree of accuracy how long it will take to deliver the end product, as there is a clear understanding of the software. When it comes to data projects, uncertainty should be expected, as the more data that is gathered, the higher the risk that the overall context will change.

Transparency is key. Being open about the nature of data projects from the outset will help to maintain a good relationship with the customer. Bringing them into the process early and piloting the solution as outlined above, will reduce the risk of surprises down the line. As long as there is a clear commitment to solving the problem you agreed to solve, friction can be avoided.

4. Take to the cloud
Data-minded developers are in an era that is entirely theirs to own. Open-source tools such as TensorFlow, and cloud platforms such as Microsoft Azure, Google, AWS and Alibaba, are providing strong support for AI and machine learning projects. In my experience, developers working with DevOps tools and techniques are the most adept at creating value from data propositions, as they are most familiar with open-source tools and increased automation, as well as the cloud platforms that marry the two.

These platforms offer major advantages when it comes to data projects. To train machine learning models, massive infrastructure is required. A graphics processing unit that enables deep learning, for example, can be very expensive to buy and operate, whereas cloud platforms can provide the same capabilities for a fraction of the price.

So, the time is right for developers and data scientists with the knowledge and skills demanded by data projects to bring new value to businesses. The pressure on organizations to innovate at pace has never been greater, and data – when used effectively – can deliver this like never before.


SD Times news digest: .NET Core 3.0 Preview 6, Kintone’s new mobile app, and the cost of data breaches in 2018

.NET Core 3.0 Preview 6 is now available on Windows, macOS and Linux. The preview includes updates for compiling assemblies for improved startup, optimizing applications for size with the linker, and EventPipe improvements, according to Microsoft.

Also, the WPF team explained it has now published most of its WPF codebase to GitHub since the company acquired GitHub late last year.

“We are now getting very close to being feature complete for .NET Core 3.0, and are now transitioning the focus of the team to the quality of the release,” Richard Lander, program manager on the .NET team, wrote in a blog post.

Kintone announces newly redesigned mobile app for no-code platform users
No-code application development platform provider Kintone announced that it redesigned its mobile app.

The improvements include a new interface, better search and navigation capabilities, more intuitive workflows and updated notifications.

“As Kintone is ‘Mission Control’ for many of our clients, we understand that they need access to their data and critical operations no matter when or where they are,” said Dave Landa, CEO of Kintone. “Our new mobile app supports the growing need for workplace flexibility to work anytime, anywhere.”

ForgeRock report: Data breaches cost $654 billion in 2018
A report by ForgeRock shows that data breaches in the U.S. cost $654 billion in 2018 and exposed 2.8 billion consumer records.

Personally identifiable information (PII) was by far the biggest target, comprising a whopping 97% of all data breaches.

This year doesn’t look optimistic either, as the report also found that the $6.2 billion in cyberattack damage to financial services in Q1 2019 alone was drastically higher than the $8 million in damages from the same quarter last year.

Automation Anywhere teams up with Microsoft on advanced intelligent automation
Robotic Process Automation (RPA) provider Automation Anywhere is teaming up with Microsoft to advance intelligent automation.

Together, the companies will now enable joint product integration by infusing Automation Anywhere bots into Azure, as well as co-selling and joint marketing to benefit mutual customers.

“The combination of Automation Anywhere and Microsoft Azure creates an intelligent digital workplace, where software bots operate with maximum efficiency and accuracy enabling enterprises to unlock the value of intelligent automation in the cloud,” said Frank Della Rosa, the IDC research director of SaaS and cloud software.



Six steps for making a successful transition to a cloud-native architecture

Cloud native has become one of the biggest trends in the software industry. It has already changed the way we think about developing, deploying and operating software products. The cloud-native paradigm for application development has come to consist of microservices architecture, containerized services, orchestration and distributed management.

Organizations across every industry want to remain competitive, and there is a strong sense of urgency to adapt quickly, or become irrelevant. The pressing need is to secure the right amount of infrastructure flexibility and performance elasticity to manage unpredictable usage volume and geographic dispersion. Many companies are already on this journey, with varying degrees of success.

RELATED CONTENT:
Understanding the meaning of cloud-native apps and development
Big bang theory? Not for cloud-native development

A recent Cloud Foundry survey of approximately 600 IT decision makers revealed more than 75 percent are evaluating or using Platforms-as-a-Service (PaaS), whereas 72 percent are evaluating or using containers. Nearly half (46 percent) are evaluating or using serverless computing. Notably, more than one-third are employing some combination of all these technologies, and it’s in those companies using all three technologies that cloud-native computing is gaining momentum.

Adopting cloud-native architecture is much more than merely moving some workload over to a public cloud vendor. It is an entirely new and different approach to building infrastructure, developing applications and structuring your teams. Below are six steps enterprises must take to ensure a successful transition.

1. Plan to transition to cloud native

The first step in a successful transformation is to make a plan. Many organizations don’t move in the right direction because they begin with the technology. While new technology can be exciting, it can also be daunting. Otherwise highly beneficial technology can be misused to the point of frustration and abandonment.

At the outset, it’s critical to involve your leadership, partners and customers. Present your findings and high-level plans. Assemble the right team and work together to divide your cloud-native journey into phases. Then, break these phases into development projects, sprints and actions. Set clear expectations and frequently collect feedback. Ultimately, both the leadership and the engineering team must be aligned on both the business goals and the key results that the organization hopes to achieve in the short and long term by initiating a transition. Without this mutual understanding, the engineering team risks prematurely optimizing the architecture for use cases that are irrelevant to the business.

Resist the temptation to pursue the technology before you align your business mission, vision and people with your cloud native aspirations.

2. Transition away from silos to DevOps

Despite the prevalence of Agile methodology, application development is still commonly organized into the following silos: software development, quality assurance and testing, database administration, IT operations, project management, system administration, and release management.

Typically, these silos have different management structure, tools, methods of communication, vocabulary and incentives. These differences correspond to disparate views regarding the mission and implementation of the application development effort.

DevOps is both a methodology and an organizational structure. It aims to break silos open and build a common vocabulary, shared toolsets and broader channels of communication. The goal is to cultivate a culture that intensely focuses on frequent releases of high-quality deliverables. DevOps replaces heavy procedures and unnecessary bureaucracy with autonomy and accountability.

3. Move from Waterscrumfall to Continuous Delivery

Today, many Agile teams find themselves immersed in what Dave West calls water-scrum-fall. Yes, it’s good to embrace Agile principles. Too often, however, the organization does not. On many Agile teams, the result of each iteration is not actually a production-grade deliverable, even though that is the original intent of the Agile Manifesto principle of working software.

What is more common is that the new code is merely a batch that gathers together with other batches downstream. This closely resembles the conventional waterfall model. This apparent reversion to conventional development actually diminishes two key benefits of Agile delivery. Firstly, customers go several weeks without seeing any addition to the value of the application under development. Secondly, the development team endures the same period of time without receiving any truly valuable feedback.

To develop cloud-native apps and realize the benefits of cloud-native architectures, it’s necessary to make a complete shift to continuous delivery (CD). In CD, application changes are deployed automatically—several times a day.

4. Decompose your monolith

Conventional multi-tier monolithic applications are rarely found to function properly if they are moved into the cloud. This is because such a move is usually made with several major, unsupportable assumptions about the deployment environment.

Another inhibitor is that a monolith deployment is closely bound to a static, enduring infrastructure. This is largely incompatible with putative cloud computing expectations for an ephemeral and elastic infrastructure. Since cloud infrastructure doesn’t provide good support for monoliths, it’s necessary to make a plan for breaking a monolithic application into components that can live happily in the cloud.

5. Design a collection of services

In essence, a cloud-native architecture is commonly seen to be a service-based architecture. Optimally, cloud-native applications should be deployed as a collection of cloud services or APIs.

However, while the concepts are readily understood, many developers still have a strong tendency to create tightly coupled applications. Such apps align and bind tightly with the user interface. To leverage cloud-computing assets and benefits effectively, a cloud-native application should expose supporting functions as services that are independently accessible.
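To make the idea of an independently accessible supporting function concrete, here is a minimal sketch using only Node’s built-in http module; the pricing function, route and port are invented for illustration and are not tied to any product described in this article.

```typescript
import { createServer } from 'http';

// A supporting function exposed as its own small service,
// decoupled from any user interface that might consume it.
function quote(quantity: number): number {
  return Math.round(quantity * 9.99 * 100) / 100;
}

createServer((req, res) => {
  const url = new URL(req.url ?? '/', 'http://localhost');
  if (url.pathname === '/quote') {
    const qty = Number(url.searchParams.get('quantity') ?? '1');
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify({ quantity: qty, price: quote(qty) }));
    return;
  }
  res.statusCode = 404;
  res.end();
}).listen(8080); // any UI, batch job or composite service can now call GET /quote
```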

An application architecture for the cloud must be built to interact with complex, disparate, widely distributed systems. These systems can support multiple loosely coupled applications. Such apps are built to employ many services and also remain decoupled from the data. Developers can build up from the data and use it in communicating with services. These services can be combined into composite services—and composite applications—that remain flexible and scalable.

6. Decouple and decompose the data

It’s not enough to simply decompose monolithic applications into microservices. It’s also essential to decouple the data model. If a development team is given the freedom to be “autonomous” yet must still contend with a single database, the monolithic barrier to innovation remains unmoved.

If the data has been tightly bound to an application, it can’t find a good home in the cloud. Think about it: it’s necessary to decouple the data for the same reasons we know it’s best to decompose application functions into services. The effort to decouple the data will be richly rewarded with the ability to store and process the data on any cloud instance.

Moving to a cloud-native architecture will include time-consuming challenges requiring diligence and dedication. It’s not simply getting apps to run in a cloud-computing environment. Cloud native demands major changes in the supporting infrastructure and a shift to designing apps around microservices. In addition, foundational change requires new tools for cloud-native operations.

But the long-term gains will be extraordinary. Enterprises can go from idea to app in the shortest amount of time. No other app development paradigm is more efficient. This is one of the smartest investments an enterprise can pursue.

To learn more about containerized infrastructure and cloud native technologies, consider coming to KubeCon + CloudNativeCon San Diego, Nov. 18-21.


SD Times news digest: Amazon Personalize, Testim raises $10 million, SwiftStack’s big data analytics solution and IBM’s AutoAI

AWS announced that its machine learning technology Amazon Personalize is now available. This brings the technology used on Amazon’s own retail site to AWS developers to incorporate into their own applications.

According to the company, developers can take advantage of product recommendations, individualized search results and customized direct marketing, while Amazon handles the infrastructure and machine learning pipeline.
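As a rough sketch of what that looks like from application code, the snippet below requests recommendations for a user from a Personalize campaign via the AWS SDK for JavaScript (aws-sdk v2); the campaign ARN, user ID and region are placeholders, and the call assumes a campaign has already been trained and deployed.

```typescript
import AWS from 'aws-sdk';

const personalize = new AWS.PersonalizeRuntime({ region: 'us-east-1' });

personalize
  .getRecommendations({
    // Placeholder ARN for a campaign trained on your interaction data
    campaignArn: 'arn:aws:personalize:us-east-1:123456789012:campaign/demo-campaign',
    userId: 'user-42',
    numResults: 10,
  })
  .promise()
  .then((result) => {
    // itemList holds the personalized item IDs, best match first
    console.log((result.itemList ?? []).map((item) => item.itemId));
  })
  .catch(console.error);
```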

“These artificial intelligence services, like Amazon Personalize, do not require any machine learning experience to immediately train, tune, and deploy models to meet their business demands,” said Swami Sivasubramanian, vice president of machine learning at Amazon Web Services, Inc.

Testim raises $10 million for its AI based software testing solution
AI-based software testing company Testim announced that it received $10 million in Series B funding, bringing its total capital to $19.5 million.

In a press release, Testim explained it will use the funds to invest in its AI-software testing solution to address the demand for continuous testing “allowing development teams to move at the speed of business without compromising software quality.” It also plans to invest in its mobile app test automation platform, which is in early access.

“To remain competitive, software teams must move faster than ever,” said Oren Rubin, founder and CEO of Testim. “We are helping them test more with much less effort, reducing their release risk and increasing their velocity to market.”  

SwiftStack announces new big data analytics solution for hybrid and multi cloud
SwiftStack, a provider of multi-cloud data storage and management, announced a data analytics solution that boasts up to 10 times higher performance, according to the company.

The solution is built for data-driven workloads using popular frameworks and applications like Hadoop, Spark, Presto, TensorFlow, and Hive and enables users to create an AI/ML data pipeline.

“We’re seeing an ever-increasing demand to extract value from data with AI and analytics workloads, and bringing the data closer to compute anywhere, with high performance and low cost is consistently becoming a challenge for enterprises,” said Dipti Borkar, vice president of product and marketing at Alluxio. “SwiftStack’s data analytics solution solves this problem by providing a cost effective yet high performance and rich alternative to power modern data-intensive workloads.”

IBM adds AutoAI to Watson
IBM added a new AutoAI capability to Watson Studio on IBM Cloud that aims to speed up data processes through automation. IBM said that this new capability will free up time for data scientists to work on deploying ML models.

“We have seen that complexity of data infrastructures can be daunting to the most sophisticated companies, but it can be overwhelming for those with little to no technical resources,” said Rob Thomas, general manager of IBM Data and AI. “The automation capabilities we’re putting in Watson Studio are designed to smooth the process and help clients start building ML models and experiments faster.”

In addition, AutoAI contains a suite of model types for enterprise data science, such as gradient boosted trees, and is engineered to let users quickly scale ML experimentations and deployment processes, IBM explained in a post.

Apollo raises $22 million for GraphQL-based data graph
GraphQL API technology provider Apollo raised $22 million in growth funding.

The company plans to utilize the money to advance its Data Graph Platform, which allows app developers to build a data graph on top of their company’s existing APIs.
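A minimal sketch of that idea with Apollo’s open-source server, where a GraphQL schema is layered over an existing REST endpoint; the schema, resolver and internal URL are invented for illustration.

```typescript
import { ApolloServer, gql } from 'apollo-server';
import fetch from 'node-fetch';

// The data graph: a single schema app developers can query,
// regardless of which backing API actually serves the data.
const typeDefs = gql`
  type Product {
    id: ID!
    name: String!
  }
  type Query {
    products: [Product!]!
  }
`;

const resolvers = {
  Query: {
    // Delegates to an existing (illustrative) internal REST API
    products: async () => {
      const res = await fetch('https://internal.example.com/api/products');
      return res.json();
    },
  },
};

new ApolloServer({ typeDefs, resolvers })
  .listen(4000)
  .then(({ url }) => console.log(`Data graph running at ${url}`));
```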

“We need to do more to support and empower app developers,” said Geoff Schmidt, co-founder and CEO of Apollo. “Our goal is for every company in the world to run on a data graph so that app developers can spend their time building great things for the rest of us.”


Google moves forward with 64-bit operating system requirements

Google is continuing its effort to support only 64-bit operating systems and applications. The company first announced its plan at the end of last year, but as the timeline moves closer it is giving developers new updates so they can prepare.

According to the Android team, with its recent Project Marble efforts the team has been able to provide new features and performance improvements to its Integrated Development Environment. Because of that, the team believes only supporting 64-bit operating systems and applications will provide a smoother developer experience when working with the IDE and Android Emulator.

RELATED CONTENT: Google prepares developers for 64-bit app requirements

The first step will be to deprecate the 32-bit versions of Android Studio and Android Emulator. During this process, the products will continue to work, but will not receive any new updates. After one year, the company plans to officially end product support and remove the 32-bit versions as well as download links. At that point, the products should still work for developers using the 32-bit versions, but there will no longer be links available to re-download them.

Android Studio IDE 3.6 deprecation will start on December 31, 2019, with end of support on December 31, 2020. Android Emulator 28.0.25 will be deprecated on June 30, 2019, with end of support on December 31, 2020.

Sam Lin, product manager for Android, explained that the benefits of a 64-bit development environment include better performance with more access to memory, the ability to build 64-bit versions of apps using C/C++ native code, and easier testing on emulators.

“Before ending support for the 32-bit version of Android Studio, we want to inform you in advance, provide guidance, and allow for a one-year lead time to help you migrate to a 64-bit operating system. You can still use 32-bit versions of Android Studio, but be mindful that these versions will not receive future updates. Therefore, if you want to migrate we suggest you start planning early so that you can continue to get the latest product updates and take advantage of the performance improvements of a 64-bit development environment,” Lin wrote in a post.


CloudBees acquires Rollout to bring feature management to DevOps

CI/CD software provider CloudBees wants to help developers release software with more flexibility and less risk with the acquisition of the secure feature management company Rollout.

“Our goal is to help organizations deliver great, feature-rich software efficiently while minimizing risks associated with the deployment process,” said Sacha Labourey, CEO and co-founder of CloudBees. “The acquisition of Rollout gives CloudBees customers the flexibility to decouple features from software versions. Using Rollout on their trusted enterprise platform allows developers to test and merge changes with more confidence than ever before.”

With Rollout, developers can control both the roll out and roll back of features instantly on any platform, including mobile, regardless of deployment restrictions, according to the company.

Rollout provides the ability to configure each aspect of feature flags via YAML files stored and version controlled right alongside the application source code.

According to Ben Williams, the vice president of products at CloudBees, Rollout works in three steps:

  1. Define a set of feature flag objects in an application’s code.
  2. Register an application with the Rollout dashboard, and add the App Key to the application’s code base.
  3. Run the application.
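A minimal sketch of those three steps using Rollout’s JavaScript SDK (the rox-browser package); the flag name, namespace and App Key placeholder are illustrative, and the exact API surface may differ from what CloudBees ships going forward.

```typescript
import Rox from 'rox-browser';

// Step 1: define feature flag objects in the application's code
const flags = {
  newCheckout: new Rox.Flag(false), // defaults to off
};

// Step 2: register the flags and connect with the App Key from the Rollout dashboard
Rox.register('checkout', flags);
Rox.setup('<YOUR-APP-KEY>').then(() => {
  // Step 3: run the application and branch on the flag at runtime;
  // the dashboard can flip this on or off without a redeploy
  if (flags.newCheckout.isEnabled()) {
    console.log('Rendering the new checkout flow');
  } else {
    console.log('Rendering the existing checkout flow');
  }
});
```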

“New features or fixes can be rolled out to specific target customers or customer segments to gather focused feedback or to help address issues faced by specific customers in advance of wider availability,” Williams wrote in a blog post. “You can even securely give control of this to non-developer users to remove yourselves from being a bottleneck in the process.”

Williams went on to explain that this acquisition is just another step in the company’s mission to simplify the lives of developers and cater to their delivery and deployment needs. “Whether you are creating your own solo projects, are part of a nascent startup, or are contributing to the products in organizations with thousands of developers – we got you,” he added.

Going forward, users can expect greater investment in Rollout and benefit from CloudBees’ expertise when it comes to DevOps. Rollout’s co-founders will remain a part of Rollout.

“Feature management has a huge opportunity ahead of it, it is one of the simplest yet powerful mechanisms you can have in your toolbox. Long term, we believe feature management is not only going to change software delivery but also software as a whole, making software adaptive to its customers,” Rollout’s co-founders Erez Rusovsky and Eyal Keren wrote in a blog post.


Angular previews next-generation compilation and rendering pipeline Ivy

Now that Angular 8 has been released, developers can opt in and try the preview version of Ivy. The goal of Ivy is to make Angular smaller, easier to debug and faster to compile. The team has been working on Ivy for more than a year now.

RELATED CONTENT:
Angular 8 released with builder APIs and web worker support
Angular lays out plan for 8.0 release featuring Ivy preview

The opt-in preview of Ivy will give developers a chance to see how their applications will work and provide feedback on any necessary changes or improvements before Ivy is officially released. The opt-in preview will also enable developers to switch between the Ivy and View Engine build and rendering pipelines.

According to Stephen Fluin, developer advocate for Angular at Google, the preview is only intended to test backward compatibility. Performance and bundle size improvements are still a work in progress, he explained.

“The plan is to make sure most Angular applications keep working without significant changes, and then to focus on leveraging the improvements to the underlying framework,” Fluin wrote in a blog post.

After testing Ivy’s backward compatibility across thousands of test suites within Google, the team found 97 percent of test suites are passing.

Once developers test Ivy and report any issues they come across, the Angular team plans to fix them whenever possible, acknowledge the issues and help developers identify whether they have run into an existing issue, and automate fixes where it can.

Going forward, the team will also work on reducing framework size and offer new ways of bootstrapping.

“We’re rapidly approaching a future where the benefits of Ivy are automatic and universal for all developers and Project Ivy comes to a conclusion,” Fluin wrote.
