Washington state passes new facial recognition legislation

Microsoft has long been calling for stricter regulation of facial recognition technology, and is now revealing the progress made since its 2018 call for governments to publicly regulate the technology.

Since that public call in 2018, a number of countries have banned or put moratoriums on the use of the technology, but none have “enacted specific legal controls that permit facial recognition to be used while regulating the risks inherent in the technology.” Things are changing, though, starting in the state Microsoft calls home. On Tuesday, Washington governor Jay Inslee signed into law facial recognition legislation that had passed the state legislature a few weeks ago. Washington’s new law includes safeguards that ensure upfront testing, transparency and accountability for facial recognition, as well as protections for civil liberties.

RELATED CONTENT: 
Microsoft calls for public regulation of facial recognition technology 
Microsoft urges tech companies to create safeguards for facial recognition

The new law forces the risk of bias in facial recognition to be examined. It states that a state or local government agency will only be able to deploy facial recognition technology if it makes an API available for testing “accuracy and unfair performance differences across distinct subpopulations.” It also forces vendors to disclose complaints of bias in their service.
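
The kind of disparity testing that requirement describes can be pictured with a short, hypothetical sketch: given labeled results from a vendor's test API, compute accuracy separately per subpopulation and flag large gaps. The record fields and the threshold below are illustrative assumptions, not anything specified by the statute or by any vendor.

```python
from collections import defaultdict

# Hypothetical labeled results from a vendor's test API: each record carries the
# subpopulation label, the ground truth, and the system's prediction.
results = [
    {"group": "A", "actual": True,  "predicted": True},
    {"group": "A", "actual": False, "predicted": False},
    {"group": "B", "actual": True,  "predicted": False},
    {"group": "B", "actual": False, "predicted": False},
]

def accuracy_by_group(records):
    """Return accuracy computed separately for each subpopulation."""
    totals, correct = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        correct[r["group"]] += int(r["actual"] == r["predicted"])
    return {group: correct[group] / totals[group] for group in totals}

scores = accuracy_by_group(results)
gap = max(scores.values()) - min(scores.values())
# The 5-point gap threshold is arbitrary, purely for illustration.
print(scores, "unfair difference" if gap > 0.05 else "within threshold")
```

In practice the same per-group breakdown would also be applied to false-match and false-non-match rates, since overall accuracy alone can hide disparities.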

The law also forces facial recognition vendors and governments to be both transparent and accountable — two of the ethical and human rights principles essential to AI. The law forces agencies to file a public notice of intent specifying the purpose the technology will be used for before deploying it. This is to ensure that the public is informed at the beginning of the adoption process. The law also requires that agencies establish a clear use and data management policy, data integrity and retention policies, and strong cybersecurity measures.

According to Microsoft, Washington is the first state to enact facial recognition rules that protect civil liberties and human rights. The law adds protection against mass surveillance, added protection for specific human rights, procedural safeguards for criminal trials, and detailed transparency requirements relating to civil liberties.

“Ultimately, as we consider the continuing evolution of facial recognition regulation, we should borrow from the famous phrase and recognize that Washington’s law reflects ‘not the beginning of the end, but the end of the beginning.’ Finally, a real-world example for the specific regulation of facial recognition now exists. Some will argue it does too little. Others will contend it goes too far. When it comes to new rules for changing technology, this is the definition of progress,” Brad Smith, president of Microsoft, wrote in a post.

SD Times news digest: .NET 5.0 Preview 2, the Lightstep observability platform, and Testlio 3.0 introduces networked testing

Microsoft announced the .NET 5.0 Preview 2, which contains a set of smaller features and performance improvements.

This includes code quality improvements in RyuJIT and changes to the garbage collector. 

The company said it is continuing to work on bigger features for .NET 5.0, some of which are starting to show up as initial designs at dotnet/designs.

Additional details are available here.

Lightstep releases observability platform
The Lightstep observability platform adds telemetry data that gives Lightstep customers access to memory, CPU, and network metrics for free.

It also includes new Error Analysis intelligence that can resolve complex, multi-service regressions in less than a minute, and it allows users to compare deployment versions instantly.
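
As a rough sketch of what such a version comparison involves (not Lightstep's actual implementation), the snippet below contrasts median and approximate p95 latency between two hypothetical deployment versions:

```python
import statistics

# Hypothetical request latencies (in ms) tagged by deployment version.
samples = {
    "v1.4.0": [120, 135, 128, 140, 131, 125, 133],
    "v1.4.1": [122, 210, 198, 240, 133, 225, 204],
}

def p95(values):
    """Approximate 95th percentile using the nearest-rank method."""
    ordered = sorted(values)
    index = min(len(ordered) - 1, int(round(0.95 * (len(ordered) - 1))))
    return ordered[index]

for version, latencies in samples.items():
    print(version, "median:", statistics.median(latencies), "p95:", p95(latencies))
```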

“We can analyze metrics the same way we do your trace data: by surfacing exactly what you need to resolve an issue or improve performance,” Lightstep wrote in a post.

Testlio introduces networked testing 
Testlio 3.0 provides new modules, a new UI/UX, new integration frameworks, and enhanced software engines. 

Testlio 3.0 now provides Tests and Runs, and the upgraded Testlio Builds service enables seamless package handoffs from CI/CD systems.

The updated version also offers networked testing capabilities such as burstable swarm teams, compressed testing windows, and connected real systems. 

Additional details are available here.

University teams up on live coding online bootcamp
Fullstack Academy and Louisiana University joined forces to offer a “Live Online” bootcamp.

The 26-week part-time training program is a response to the surge in job growth in the Baton Rouge and New Orleans regions. 

The bootcamp will teach students how to monitor and secure systems, networks, and applications, as well as deploy offensive and defensive tactics needed to appropriately respond to cyber breaches.

Additional information is available here.

Gartner’s 3 requirements for APM

APM, as Gartner defines it in its Magic Quadrant criteria, is based on three broad sets of capabilities, and to be considered an APM vendor by Gartner, you have to have all three. Charley Rich, Gartner research director and lead author of its APM Magic Quadrant, explained:

The first one is digital experience monitoring (DXM). That, Rich said, is “the ability to do real user monitoring, injecting JavaScript in a browser, and synthetic transactions — the recording and playback of those from different geographical points of presence.” This is critical for the last mile of a transaction; it allows you to isolate issues, use analytics to figure out what’s normal and what is not, and understand the impact of latency. But, he cautioned, you can’t get to the root cause of issues with DXM alone, because it’s just the last mile.

RELATED CONTENT: Application Performance Monitoring: What it means in today’s complex software world

Digital experience monitoring, as defined by Gartner, captures UX latency and errors — the spinner or hourglass you see on a mobile app, where it’s just waiting and nothing happens — and finds out why.

Rich said this is done by doing real user monitoring — for web apps, that means injecting JavaScript into the browser to break down the load times of everything on your page as well as background calls. It also requires the ability to capture screenshots automatically, and capture entire user sessions. This, he said, “can get a movie of your interactions, so when they’re doing problem resolution, not only do they have the log data, actual data from what you said when a ticket was opened, and other performance metrics, but they can see what you saw, and play it back in slow-motion, which often provides clues you don’t know.”

The second component of a Gartner-defined APM solution is application discovery, diagnostics and tracing. This is the technology to deploy agents out to the different applications, VMs, containers, and the like. With this, Rich said, you can “discover all the applications, profile all their usage, all of their connections, and then stitch that together to what we learn from digital experience to represent the end-to-end transaction, with all of the points of latency and bottlenecks and errors so we understand the entire thing from the web browser all the way through application servers, middleware and databases.”
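
To make the idea of stitched-together spans concrete, here is a minimal, hand-instrumented sketch using the OpenTelemetry Python SDK (assuming the opentelemetry-api and opentelemetry-sdk packages). It only prints spans to the console and is not how any particular vendor's auto-instrumenting agent works.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print finished spans to stdout; a real deployment would export them to an APM backend.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("checkout-demo")

def query_database():
    # Child span: stands in for a real database call.
    with tracer.start_as_current_span("db.query"):
        pass

def handle_request():
    # Parent span represents the incoming web request; the nested span shows where latency accrues.
    with tracer.start_as_current_span("http.request"):
        query_database()

handle_request()
```

An agent-based APM product performs this kind of instrumentation automatically and ships the spans to a backend, where they are assembled into end-to-end traces.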

The final component is analytics. Using AI and machine learning, analytics applied to application performance monitoring solutions can do event correlation, reduce false alarms, perform anomaly detection to find outliers, and then do root cause analysis driven by algorithms and graph analysis.
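
One way to picture the anomaly-detection piece is a rolling z-score over a latency series, where points far from the recent mean get flagged as outliers. Production APM analytics are far more sophisticated; this sketch, with made-up numbers, only shows the basic idea.

```python
import statistics

# Made-up per-minute latency samples (ms); the 250 is the anomaly to catch.
latencies_ms = [102, 98, 105, 101, 99, 97, 103, 100, 250, 104]

def find_outliers(values, window=5, threshold=3.0):
    """Flag points that deviate from the trailing window mean by more than `threshold` sigmas."""
    outliers = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        mean, stdev = statistics.mean(recent), statistics.pstdev(recent)
        if stdev and abs(values[i] - mean) / stdev > threshold:
            outliers.append((i, values[i]))
    return outliers

print(find_outliers(latencies_ms))  # [(8, 250)] -- the spike stands out
```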

Application Performance Monitoring: What it means in today’s complex software world

Software continues to grow as the driver of today’s global economy, and how a company’s applications perform is critical to retaining customer loyalty and business. People now demand instant gratification and will not tolerate latency — not even a little bit.

As a result, application performance monitoring is perhaps more important than ever to companies looking to remain competitive in this digital economy. But today’s APM doesn’t look much like the APM of a decade ago. Performance monitoring then was more about the application itself, and very specific to the data tied to that application. Back then, applications ran in on-premises datacenters and were written as monoliths, largely in Java, tied to a single database. With that simple n-tier architecture, organizations were able to easily collect all the data they needed, which was then displayed in Network Operations Centers to systems administrators. The hard work came from command-line launching of monitoring tools — requiring systems administration experts — sifting through log files to see what was real and what was a false alarm, and from reaching the right people to remediate the problem.

RELATED CONTENT: APMs are more important than ever for microservice-based architectures

In today’s world, doing APM efficiently is a much greater challenge. Applications are cobbled together, not written in monoliths. Some of those components might be running on-premises while others are likely to be cloud services, written as microservices and running in containers. Data is coming from the application, from containers, Kubernetes, service meshes, mobile and edge devices, APIs and more. The complexities of modern software architectures broaden the definition of what it means to do performance monitoring.

“APM solutions have adapted and adjusted greatly over the last 10 years. You wouldn’t recognize them at all from what they were when this market was first defined,” said Charley Rich, a research director at Gartner and lead author of the APM Magic Quadrant, as well as the lead author on Gartner’s AIOPs market guide. 

So, although APM is a mature practice, organizations are having to look beyond the application — to multiple clouds and data sources, to the network, to the IT infrastructure — to get the big picture of what’s going on with their applications. And we’re hearing talk of automation, machine learning and being proactive about problem remediation, rather than being reactive.

 “APM, a few years ago, started expanding broadly both downstream and upstream to incorporate infrastructure monitoring into the products,” Rich said. “Many times, there’s a problem on a server, or a VM, or a container, and that’s the root cause of the problem. If you don’t have that infrastructure data, you can only infer.”

Rekha Singha, the Software-Computing Systems Research Area head at Tata Consultancy Services, sees two major monitoring challenges that modern software architectures present.

First, she said, is multi-layered distributed deployment using Big Data technologies, such as Kafka, Hadoop and HDFS. The second is that modern software, also called Software 2.0, is a mix of traditional task-driven programs and data-driven machine learning models. “The distributed deployment brings additional performance monitoring challenges due to cascaded failures, staggered processes and global clock synchronization for correlating events across the cluster,” she explained. “Further, a Software 2.0 architecture may need a tightly integrated pipeline from development to production to ensure good accuracy for data-driven models. Performance definitions for Software 2.0 architectures are extended to both system performance and model performance.”

Moreover, she added, modern applications are largely deployed on heterogeneous architectures, including CPU, GPU, FPGA and ASICs. “We still do not have mechanisms to monitor performance of these hardware accelerators and the applications executing on them,” she noted. 

The new culture of APM
Despite these mechanisms for total monitoring not being available, companies today need to compete to be more responsive to customer needs. And to do so, they have to be proactive. “We’re moving from a culture of responding like our hair’s on fire to being proactive,” said Joe Butson, co-founder of consulting company Big Deal Digital. “We have a lot more data … and we have to get that information into some sort of a visualization tool. And, we have to prioritize what we’re watching. What this has done is change the culture of the people looking at this information and trying to monitor and trying to move from a reactive to proactive mode.”

In earlier days of APM, when things in an application slowed or broke, people would get paged. Butson said, “It’s fine if it happens from 9 to 5, you have lots of people in the office, but then, some poor person’s got the pager that night, and that just didn’t work because of what it meant for the MTTR — mean time to recovery — depending upon when the event occurred, it took a long time to recover. In a very digitized world, if you’re down, it makes it into the press, so you have a lot of risk, from an organizational perspective, and there’s reputation risk.”

High-performing companies are looking at data and anticipating what could happen. And that’s a really big change, Butson said. “Organizations that do this well are winning in the marketplace.”

Whose job is it, anyway?
With all of this data being generated and collected, more people in more parts of the enterprise need access to this information. “I think the big thing is, 10-15 years ago, there were a lot of app support teams doing monitoring, I&O teams, who were very relegated to this task,” said Stephen Elliot, program vice president for I&O at research firm IDC. “You know, ‘identify the problem, go solve it.’ Then the war rooms were created. Now, with agile and DevOps, we have [site reliability engineers], we have DevOps engineers, there are a lot broader set of people that might own the responsibility, or have to be part of the broader process discussion.”

And that’s a cultural change. “In the NOCs, we would have had operations engineers and sys admins looking at things,” Butson said. “We’re moving across the silos and have the development people and their managers looking at refined views, because they can’t consume it all.” 

It’s up to each segment of the organization looking at data to prioritize what they’re looking at. “The dev world comes at it a little differently than the operations people,” Butson continued. “Operations people are looking for stability. The development people really care about speed. And now that you’re bringing security people into it, they look at their own things in their own way. When you’re talking about operations and engineering and the business people getting together, that’s not a natural thing, but it’s far better to have the end-to-end shared vision than to have silos. You want to have a shared understanding. You want people working together in a cross-functional way.”

Enterprises are thinking through the question of who owns responsibility for performance and availability of a service. According to IDC’s Elliot, there is a modern approach to performance and availability.  He said at modern companies, the thinking is, “ ‘we’ve got a DevOps team, and when they write the service, they own the service, they have full end-to-end responsibilities, including security, performance and availability.’ That’s a modern, advanced way to think.”

In the vast majority of companies, ownership for performance and availability lies with particular groups having different responsibilities. This can be based on the enterprise’s organizational structure, and the skills and maturity level that each team has. For instance, an infrastructure and operations group might own performance tuning. Elliot said, “We’ve talked to clients who have a cloud COE that actually have responsibility for that particular cloud. While they may be using utilities from a cloud provider, like AWS Cloud Watch or Cloud Trail, they also have the idea that they have to not only trust their data but then they have to validate it. They might have an additional observability tool to help validate the performance they’re expecting from that public cloud provider.” 

In those modern organizations, site reliability engineers (SREs) often have that responsibility. But again, Elliot here stressed skill sets. “When we talk to customers about an SRE, it’s really dependent on, where did these folks come from?” he said. “Were they reallocated internally? Are they a combination of skills from ops and dev and business? Typically, these folks reside more along the lines of IT operations teams, and generally they have operating history with performance management, change management, monitoring. They also start thinking about are these the right tasks for these folks to own? Do they have the skills to execute it properly?”

Organizations also have to balance that out with the notion of applying development practices to traditional I&O principles, and bringing a software engineering mindset to systems admin disciplines. And, according to Elliot, “It’s a hard transition.”

Compound all that with the growing complexity of applications, running in the cloud as containerized microservices, managed by Kubernetes using, say, an Istio service mesh in a multicloud environment.

TCS’ Singha explained that containers are not permanent, and microservices deployments have shorter execution times. Therefore, any instrumentation in these types of deployment could affect the guarantee of application performance, she said. As for functions as a service, which are stateless, application states need to be maintained explicitly for performance analysis, she continued.

It is these changes in software architectures and infrastructure that are forcing organizations to rethink how they approach performance monitoring from a culture standpoint and from a tooling standpoint.

APM vendors are adding capabilities to do infrastructure monitoring, which encompasses server monitoring, some amount of log file analysis, and some amount of network performance monitoring, Gartner’s Rich said. Others are adding or have added capabilities to map out business processes and relate the milestones in a business process to what the APM solution is monitoring. “All the data’s there,” Rich said. “It’s in the payloads, it’s accessible through APIs.” He said this ability to visualize data can show you, for instance, why Boston users are abandoning their carts at a rate 20% greater than users in New York over the last three days, and come up with something in the application that explains that.
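
That cart-abandonment comparison boils down to joining business events with performance data and grouping by region. The sketch below uses pandas with invented column names purely to show the shape of that analysis, not any vendor's implementation.

```python
import pandas as pd

# Invented sample data: one row per user session, tagged with region,
# whether the cart was abandoned, and the page latency the user saw.
sessions = pd.DataFrame({
    "region": ["Boston", "Boston", "Boston", "New York", "New York", "New York"],
    "abandoned": [True, True, False, False, True, False],
    "latency_ms": [2400, 2600, 1900, 900, 1400, 1100],
})

# Abandonment rate and average latency side by side, per region.
summary = sessions.groupby("region").agg(
    abandonment_rate=("abandoned", "mean"),
    avg_latency_ms=("latency_ms", "mean"),
)
print(summary)  # latency lining up with abandonment suggests where to dig in the app
```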

Outreachy awarded IBM’s Open Source Community Grant

IBM has announced Outreachy is the winner of its $50,000 Open Source Community Grant. IBM started awarding quarterly grants last October in an effort to promote nonprofits dedicated to education, inclusiveness and skill building. Girls Who Code was awarded the first IBM Open Source Community Grant.

Outreachy provides an internship and mentorship program in the open source and free software community for groups that face underrepresentation, systemic bias and/or discrimination in the technology industry. The internships are done remotely, which, according to IBM, will become essential as people are forced to work from home due to the COVID-19 pandemic. Interns are provided a $5,500 stipend for a three-month internship and an additional $500 to travel to conferences or events.

“The current COVID-19 crisis underscores the inequities in our society. People who have jobs that can be done remotely find themselves in a stable situation and able to weather this crisis at home while many workers have no immediate way to earn a living without risking their lives,” said Karen Sandler, executive director of the Software Freedom Conservancy, the parent organization of Outreachy. “Getting paid home-based work to folks who are subject to systemic bias has never been more important than it is right now. We’re working to make this Outreachy round the biggest one ever to help the most people right now. This grant will make a big difference to offset the reduction in some of our corporate sponsorship from companies that are struggling.”

RELATED CONTENT: Developers take on COVID-19 with open-source projects, hackathons

The grant is split up between a $25,000 cash award and a $25,000 technology award to go to education and career development activities. 

“Our Open Source Community Grant identifies and rewards future developers and open source leaders and creates new tech opportunities for underrepresented communities,” Todd Moore, vice president of open technology and developer advocacy at IBM, and Guillermo Miranda, vice president and global head of corporate social responsibility at IBM, wrote in a post. “Our open source community nominated a number of nonprofits doing incredible work and, while voting was close with plenty of deserving organizations in the mix, we awarded Outreachy the most votes for their commitment to providing paid internships to underserved and underrepresented minorities.”

SD Times Open-Source Project of the Week: CHIME

The COVID-19 Hospital Impact Model for Epidemics (CHIME) is a tool that provides up-to-date projections of what additional resources will be required in certain hospitals during the COVID-19 outbreak. 

Informed estimates of how many patients will need hospitalization, ICU beds, and mechanical ventilation over the coming days and weeks will be crucial inputs to readiness responses and mitigation strategies, according to the Predictive Healthcare team at Penn Medicine, which developed the project.

The tool uses an SIR model, which computes the theoretical number of people infected with a contagious illness in a closed population over time.

Hospitals can enter information about their population and then run a standard SIR model to project the number of new hospital admissions each day. This will result in best- and worst-case scenarios to assist with capacity planning. The Doubling Time parameter in the SIR model defines how quickly a disease spreads.
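
To give a sense of how those pieces fit together, here is a minimal discrete-time SIR projection in which the doubling time sets the growth rate. The population size, recovery period, and hospitalization rate are invented for illustration and are not CHIME's actual defaults.

```python
def sir_projection(population, infected, doubling_time_days,
                   recovery_days=14, days=60, hospitalization_rate=0.05):
    """Project daily new hospital admissions with a simple discrete-time SIR model."""
    susceptible = float(population - infected)
    infected = float(infected)
    gamma = 1.0 / recovery_days                        # daily recovery rate
    growth = 2.0 ** (1.0 / doubling_time_days) - 1.0   # daily growth implied by doubling time
    beta = (growth + gamma) / susceptible              # transmission rate per susceptible person
    daily_admissions = []
    for _ in range(days):
        new_infections = beta * susceptible * infected
        recoveries = gamma * infected
        susceptible -= new_infections
        infected += new_infections - recoveries
        daily_admissions.append(new_infections * hospitalization_rate)
    return daily_admissions

admissions = sir_projection(population=1_000_000, infected=100, doubling_time_days=6)
print(round(max(admissions)), "projected peak daily admissions under these assumptions")
```

Shortening the doubling time in this sketch produces an earlier, sharper admissions peak, which is exactly the kind of sensitivity capacity planners need to see.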

The tool was originally designed for making projections in Philadelphia, but is capable of providing input on other cities through a switch of parameters. 

The project is currently in the testing stage and is looking for help with project management, DevOps professionals to ensure that the dashboard can handle the increase in traffic, and Python developers, since Python is the predominant language used in the project.

Project updates in real time are available here.

A guide to DevOps testing tools

The BlazeMeter Continuous Testing Platform is a complete solution for shift-left continuous testing. The platform includes UI functional testing, user experience testing, API testing and monitoring, performance testing, and virtual services. All capabilities are deeply integrated in an intuitive workflow designed for agile teams and provide robust support for popular open source tools. Delivered in SaaS with support for multiple clouds or private cloud, it is a powerful tool for delivering innovation with quality and speed. 

Mobile Labs: The company’s patented GigaFox is offered on-premises or hosted, and solves mobile device sharing and management challenges that arise during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server with custom tools provides “instant on” Appium test automation. GigaFox enables scheduling, collaboration, user management, security, mobile DevOps, and continuous automated testing for mobility teams spread across the globe and can connect cloud devices to an industry-leading number of third-party tools such as Xcode, Android Studio, and many commercial test automation tools.

Cantata from QA Systems is a certified standards compliant automated unit and integration testing tool for embedded C/C++ code. Highly automated test case generation, code coverage, static metrics and requirements tracing are supplemented by architectural analysis and test status management with Test Architect and Team Reporting add-ons. Cantata is integrated with an extensive set of development toolchains, from cross-compilers and debuggers to ALM and continuous integration tools.

Quali’s CloudShell Colony helps organizations streamline effective application testing by providing development and testing teams with self-service access to automated test environments while delivering security, governance, and cost control. By removing error-prone manual inefficiencies and conflict-ridden static test environments, it creates a solid foundation for Continuous Testing and DevOps. Founded in 2007, Quali helps businesses accelerate innovation, improve quality, and control costs with on-demand access to automated application and infrastructure environment provisioning across any cloud.

RELATED CONTENT:
Creating a clear testing path to DevOps takeoff
How do you help test in DevOps?

BMC AMI DevOps for Db2 accelerates the delivery of new and updated applications to the market. It comes with out-of-the-box integration with Jenkins, an application development orchestration tool. This provides the power to automatically research database schema change requirements, streamline the review and approval process, and safely implement the database schema changes, making development and operations teams more agile.

Cobalt.io is modernizing penetration testing by building hacker-like testing into development cycles. Pentests are performed by a global team of vetted, highly skilled professionals with deep domain expertise. Cobalt.io offers a first-of-its-kind find-to-fix workflow that allows software companies to find and remediate vulnerabilities across an application portfolio, giving modern agile development teams the ability to do fast and frequent pentests and making development and security operations seamless with its integrations.

Eggplant enables companies to view their technology through the eyes of their users. The continuous, intelligent approach tests the end-to-end customer experience and investigates every possible user journey, providing unparalleled test coverage essential to DevOps success. Our technology taps AI and machine learning to test any technology on any device, operating system, or browser at any layer, from the UI to APIs to the database. 

GitLab helps delivery teams fully embrace continuous integration to automate building, packaging, and testing their code. GitLab’s industry-leading CI capabilities enable automated testing, Static Application Security Testing, Dynamic Application Security testing, and code quality analysis to provide fast feedback to developers and testers. With pipelines that support concurrent testing and parallel execution, teams get insight into every push, allowing them to deliver higher quality code faster.

HCL: AppScan is an automated application security testing and management tool. The company recently released version 10 of the solution, which focuses on securing DevOps. New features include interactive application security testing capabilities that go beyond SAST, DAST and SCA; out-of-the-box integrations with DevOps toolchains; and a new plugin to help developers identify vulnerabilities in their dev environments.

HPE Software’s automated testing solutions simplify software testing within fast-moving agile teams and for Continuous Integration scenarios. Integrated with DevOps tools and ALM solutions, HPE automated testing solutions keep quality at the center of today’s modern applications and hybrid infrastructures.

IBM: Continuous Testing provides an end-to-end picture of how products react to new code. It does this early in the development lifecycle, which gives product teams confidence to push incremental code changes more frequently. IBM’s cloud-native Continuous Testing platform, Rational Test Automation Server, along with the market-leading Rational Test Workbench and Virtualization Server, empowers teams to achieve this over a wide range of scenarios covering mobile, cloud, cognitive, mainframe and more with minimal coding.

Micro Focus: Minimize risk and maximize user satisfaction by testing early, often, and at scale with Micro Focus’ industry-leading, integrated portfolio for continuous and comprehensive testing of web, mobile, and enterprise applications. With extensive technology support and AI-driven capabilities, you can test complex load, stress and performance scenarios, and implement resilient functional test automation throughout your entire DevOps pipeline. Our tools provide an end-to-end view of quality, with specific, actionable and timely feedback on your applications’ readiness status.

OverOps is a continuous reliability solution that helps companies prevent critical errors that are missed by testing and static analysis. Using OverOps, teams can quickly identify and resolve critical software issues. Unlike static code analyzers, log analyzers and APMs that rely on foresight, OverOps analyzes your code at runtime to produce specialized data that tells when, where and why code breaks. OverOps runs in the cloud or on-premises with robust CI/CD integrations to ensure software reliability from testing into production. To learn more about why global organizations trust OverOps, visit www.overops.com.

Perfecto: Perfecto offers a cloud-based continuous testing platform that takes mobile and web testing to the next level. It features: a continuous quality lab with smart self-healing capabilities; test authoring, management, validation and debugging of even advanced and hard-to-test business scenarios; test execution simulations; and smart analysis. For mobile testing, users can test against more than 3,000 real devices, and web developers can boost their test portfolio with cross-browser testing in the cloud.

Progress: Telerik Test Studio enables QA and SDET professionals to create functional, performance and load tests that work immediately. Patent-pending multi-sense discovery eliminates the broken tests and technical debt that plague other testing solutions.

QASymphony’s qTest is a Test Case Management solution that integrates with popular development tools. QASymphony offers qTest eXplorer for teams doing exploratory testing. 

Sauce Labs: With more than 3 billion tests run and counting, the Sauce Labs Continuous Testing Cloud is the only continuous testing platform that delivers a 360-degree view of your customers’ application experience. It ensures web and mobile applications look, function, and perform exactly as they should on every browser, OS, and device, every single time.

ShiftLeft Inspect is a next-generation static code analysis solution, purpose-built to insert security into developer workflows without slowing them down. It accomplishes this by scanning code as fast as the pull request or build with the accuracy required to share directly with developers, without manual triage. Its coverage extends beyond technical vulnerabilities to business logic flaws, data leakage, hard-coded literals and insider threats.  

At SmartBear, we focus on your one priority that never changes: quality. We know delivering quality software over and over is complicated. So our tools are built to streamline your process while seamlessly working with all the tools you use – and will use. Whether it’s Swagger, Cucumber, ReadyAPI, Zephyr, TestComplete, or more, our tools are easy to try, easy to buy, and easy to integrate. 

Testlio is the leader in managed app testing. With robust client services, a global network of validated testers, and a comprehensive software platform, we provide a suite of flexible, scalable, and on-demand testing solutions. When apps must perform brilliantly, Testlio helps ensure world-class customer experiences. In any location. On any device. In any language.

KubeMQ achieves Red Hat OpenShift Operator Certification

The Kubernetes message queue and message broker solution provider KubeMQ has announced its Kubernetes Operator is now Red Hat OpenShift Operator certified.

The OpenShift Operator Certification is meant to give users confidence when building next-generation projects on Red Hat’s Kubernetes and container application platform, OpenShift. With the certification, users will be able to deploy KubeMQ through the Red Hat OpenShift Operator catalog.

“KubeMQ is a Kubernetes message queue broker: enterprise-grade, scalable, highly available and more secure, helping enterprises to build stable microservices solutions that can be easily scaled, as well as enabling additional microservices to be quickly developed and added to the solution,” the company wrote in a post.

KubeMQ is Kubernetes-native, easy to deploy, provides enterprise-grade assurance, and supports all messaging patterns.

“We are proud to deliver a Red Hat OpenShift Certified Operator. It is an important milestone for KubeMQ as it contributes to earning industry recognition as a qualified enterprise solution. The KubeMQ Operator will provide enterprises with simple and robust access to our Kubernetes native message queue,” said Gil Eyal, KubeMQ’s CEO.

SD Times news digest: Quick Base Sandbox, Google Pay’s Business Console, and XMLSpy

Quick Base has announced a new way for business professionals to work with IT and test low-code applications. The new Sandbox capability enables cross-functional teams to quickly create and optimize business-critical applications without risking disruption.

Sandbox provides a place to easily collaborate with IT when making changes to new and existing workflows, while giving IT departments stronger governance through tighter control over the development process, according to the company. 

Additional details are available here.

Google Pay introduces Business Console
Google Pay’s Business Console is a new tool that streamlines the process of integrating Google Pay into apps and websites.

Users will also be able to discover resources, get support at different stages throughout their integration, and keep track of progress along the way, according to the company.

“And this is only the beginning. As we add new features, the Business Console will be your go-to place to manage all your new and existing integrations with Google Pay, see how your integrations perform over time, and add support for other business- and developer-focused products,” Google wrote in a blog post.

Collibra raises $112.5 million for data-driven decisions
Collibra raised $112.5 million to further its Data Intelligence solution aimed at improving the quality of business decisions driven by data. The company’s total venture funding is now $345.5 million. 

Collibra’s suite of products helps organizations address a breadth of business challenges, including data privacy and protection, compliance and risk mitigation, operational efficiency and cost reduction, according to the company. 

Additional details are available here.

UiPath expands RPA certification program
UiPath extended its training platform and certification program to accelerate workforce readiness. 

The new courses include RPA Associate, which is the foundational certification level for all RPA job roles, and RPA Advanced Developer as a second certification level.

“UiPath is committed to investing in the workforce of the future through its UiPath Academy, which is focused on training and reskilling for the jobs of today and the jobs of tomorrow,” UiPath wrote in a post.

New tools for XML and JSON editing
The latest release of XMLSpy adds several user-requested features for working with JSON and XML, as well as new functionality for debugging XPath, updated standards and database support, and more.

The new auto-backup feature in XMLSpy is great for recovering files in the event of a software or hardware crash.

Additional details are available here.

How do you help test in DevOps?

Shamim Ahmed, CTO for DevOps Solutions at Broadcom, a global technology company:
The promise of DevOps is that we could deliver more, faster, with no sacrifice in quality. In reality – we see some common blocks to DevOps success. At Broadcom, we address those challenges: we help eliminate the testing bottleneck and bring teams together in a single platform that lets everyone work the way they want to work. Agile teams want to work in their IDEs and command lines. They want to use open source, and they want tools that are seamlessly embedded into the CI/CD pipeline. Traditional testers want to use a UI, and features like scriptless testing. 

Broadcom makes this simple with BlazeMeter Continuous Testing Platform, a single application that delivers all the functionality you need to make continuous testing a reality. BlazeMeter Continuous Testing Platform is designed for every team across the SDLC. It can be used “as code” in the IDE or with the easy UI. All teams can share assets and align around common metrics and AI-driven insights. AI is also used to optimize test cycles, predict defects and highlight areas for continuous improvement. 

RELATED CONTENT: Creating a clear testing path to DevOps takeoff

Most organizations know that DevOps success depends on the ability to shift left and right, and deliver new capabilities with volume and velocity. BlazeMeter really helps them do that – all the way from aligning the business and dev around model-based requirements to using data from production to drive continuous improvement. And best of all – we make it easy. It’s literally click to start, and there’s a free version so you can get started today.

Dan McFall, CEO of Mobile Labs, an enterprise mobile app testing company
For Mobile Labs, we really tackle the problem of mobile devices as enterprise infrastructure. What that means is answering the questions of: Where are my devices? Who has them? What state are they in? What is on them? What application versions are loaded? What can they see? All of the things you need to basically have mobile devices be available at the development and test environment. We solve that problem, and then make them essentially act just like virtual machines. You can call them via API layers. You can build a seamless, headless process around our infrastructure component into your DevOps process. You can have a broad and deep testing space that gives you the confidence that you have covered your bases. 

We are also looking into more scripting as well, such as low code or no code scripting environments, more behavioral-driven environments. We are seeing that a lot of people are resource challenged, and don’t have folks who can write mobile automation. We are going to make it easier for people to do mobile automation from a scripting perspective this year. 

Those are the areas where we are continuing to help, which is just the right people with the right skills with the access to the right environments at the right time. That is going to be a really key aspect to having a successful DevOps strategy.  

Matt Davis, managing director for QA Systems, a software quality company
QA Systems helps DevOps engineers overcome the challenges of test automation and tool integration by focusing on repeatable steps and command line interfaces. Not everything in testing can be automated. However, by removing tedious manual steps from the process, we help engineers focus on building the right tests and solving problems.

Automating checks on software quality metrics, architectural relationships, hierarchy and dependencies in your code ensures that you don’t deviate from your intended design and that your code doesn’t become less maintainable as it evolves. By combining automatic test case generation, integrated code coverage, a change-based test build system, automatic plugging of testing gaps and linking your tests directly to your requirements, engineers can now access unprecedented test capabilities. Code-level analysis and testing should be at the heart of DevOps, where developers can use them efficiently every time code is checked in. QA Systems has found that fully automating these capabilities on the basis of open standards and integrated solutions significantly enhances the functionality of the verification CI/CD pipeline.

Maya Ber Lerner, CTO of Quali, a cloud automation and digital transformation company
Test automation is great, but it only solves one part of the DevOps testing problem. To ensure the quality of your application, your developers and testers need instant access to dynamic, production-like environments throughout the value stream to develop applications and run automated tests effectively. However, time-consuming, error-prone manual processes for setting up and tearing down these environments create a huge bottleneck—leading to multiple teams struggling to share static environments, or skirting around ITOps and implementing shadow-IT practices, which can greatly drive up costs and bypass security best practices.

Environment as a Service solutions, like Quali’s CloudShell Colony, make it possible for developers and testers to gain immediate access to dynamic, production-like environments on-demand with one click, or automatically by connecting your CI/CD tools to accelerate the value stream. We even have a customer that set up a Slack-bot to provision environment requests.

With CloudShell Colony, you can bridge the gap between Dev, Sec, and ITOps leveraging the speed of self-service, automated set-up and tear-down of dynamic environments across the value stream coupled with policy-based configurations ensuring security, compliance, infrastructure utilization, and costs control all from one tool.
