Industry watch: The little dirty data secret

Our industry has a dirty little secret. Come closer, I’ll whisper it to you.

(Much of the data held in organizational databases, warehouses, lakes and stores is not very good.)

There, I’ve said it. Data quality remains a persistent problem for enterprises, and there are many reasons why. It could be that fields were filled out incorrectly, or that naming inconsistencies are pervasive, or that calculations that were done and stored have grown out of date or were incorrect to begin with.

RELATED CONTENT: 2020: The year of integration

Do you live on Main St. or Main Street? Is your job software engineer, or just developer, or, as has been seen, code ninja? How do you know whether they’re the same thing or not? Is 555-111-6666 an actual phone number? It looks like one. It has 10 digits. But is it valid? Is the string sitting in that email field an actual email address? It looks like one. But will anything mailed to that address go through, or will it bounce?
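
Checks like these are easy to automate at point of entry. Here is a minimal sketch in Python; the regular expressions are deliberately loose and purely illustrative, since real validation systems use dedicated libraries and deliverability checks that go far beyond pattern matching:

```python
import re

# Loose, illustrative patterns: a syntactically plausible string can still
# be undeliverable or unassigned, so these checks catch only obvious junk.
PHONE_RE = re.compile(r"^\d{3}-\d{3}-\d{4}$")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_phone(value: str) -> bool:
    """True if the string matches a NANP-style 10-digit pattern."""
    return bool(PHONE_RE.match(value))

def looks_like_email(value: str) -> bool:
    """True if the string has a local@domain.tld shape; says nothing
    about whether mail sent to it would actually be delivered."""
    return bool(EMAIL_RE.match(value))

print(looks_like_phone("555-111-6666"))   # plausible shape, may still be invalid
print(looks_like_email("not-an-address"))
```

The point of the sketch is exactly the column’s point: format checks only establish that data looks right, not that it is right.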

What if your company relies on data for annual financial predictions, but the underlying data is fraught with errors? Or what if your company is in trucking, and can’t maximize the amount of goods a truck can hold because the data regarding box sizes is incorrect?

Data quality is “something everyone struggles with, just keeping it clean at point of entry and enriched and useful for business purposes,” Greg Smith of data quality company Melissa told me at the company’s offices in Rancho Santa Margarita (mas tequila, por favor!), California. And, because few companies want to talk about the issue, Smith said, “That prevents other people from really knowing not only how pervasive the problem is, but that there are lots of solutions out there.”

“A position I was in at another company before this, we’d just get return mail all the time and it was just the cost of doing business,” he continued. “We were marketers but we weren’t really direct mailers. We had enough knowledge to get something out, but we didn’t really know all of the parameters on how to process it to prevent undeliverable mail, and it was just really the cost of doing business. We’d get these big cartons of mail back, and we’d just dump them, and nobody would say anything.”

Do people not talk about it because they don’t want their customers to know just how bad their data problem is? “Oh yeah, we have that all the time,” Smith said. “It’s almost impossible for us to get case studies. The first thing they do with us is slap on an NDA, before we even get to look under the kimono, as Hunter Biden would say. Before we see their dirty laundry, they definitely want the NDA in place. We’ll help them out, and they’ll never, ever admit to how bad it was.”

In some organizations, some team is in charge of making the data available to developers and different departments, and there is another team making sure it’s replicated across multiple servers. But who’s really in charge of making sure it’s accurate? “That’s where we struggle with talking about the stewards,” Smith said. “Do they really care how accurate it is if they’re not the end consumer of the data?”

Throwing another wrench into all of this are the data matching rules outlined in the European Union’s General Data Protection Regulation, which Smith said seem to contradict well-understood data management practices. “One of the things those guys are saying is typical MDM logic is 180 degrees from what GDPR is recommending,” Smith said. “Traditionally, if MDM has a doubt about whether or not two records are identical or duplicates, it’s going to err on the side that they’re not. The lost opportunity cost associated with wrongly merging them was greater than sending a couple of duplicate catalogs before you could ascertain that. GDPR says, if there’s almost any doubt that Julia Verella and Julio Verella are actually the same person, you’ve got to err on the side that they are the same person. So the logic behind the matching algorithms is completely different.”
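
The difference Smith describes comes down to where the match threshold sits and which way borderline cases fall. A toy sketch makes it concrete; the similarity measure and both thresholds here are invented for illustration and are not from any actual MDM product or the GDPR text:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def mdm_is_duplicate(a: str, b: str, threshold: float = 0.95) -> bool:
    # Traditional MDM posture: merge only when nearly certain; a stray
    # duplicate catalog is cheaper than wrongly merging two people.
    return similarity(a, b) >= threshold

def gdpr_treat_as_same(a: str, b: str, threshold: float = 0.8) -> bool:
    # GDPR-style caution: if there is real doubt, err on the side that
    # the records describe the same person.
    return similarity(a, b) >= threshold

score = similarity("Julia Verella", "Julio Verella")
print(round(score, 3),
      mdm_is_duplicate("Julia Verella", "Julio Verella"),
      gdpr_treat_as_same("Julia Verella", "Julio Verella"))
```

The two names score high but not near-certain, so the same pair of records falls on opposite sides of the two policies, which is exactly why the matching logic ends up “completely different.”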

In software development, we hear all the time that organizations should “shift left” practices such as testing and security. So why don’t we as an industry shift data quality left? I get that this is a massive undertaking in and of itself, and organizations are making progress in this regard with anomaly detection and more machine learning. But if, as everyone says, data is the lifeblood of business going forward, then data scientists must be open to talking about solutions. Because if your system breaks because data you’re pulling in from an outside source is incorrect or invalid, you, your partners and your customers all suffer.

The post Industry watch: The little dirty data secret appeared first on SD Times.


The critical standard for design thinking

Experience is top of mind for companies across many verticals, and for good reason—user experience is expected to replace price and product as the key brand differentiator by next year. Delivering remarkable experiences requires thoughtful experience design (XD)—driven by a human-centric approach to solving problems.

When we talk about a human-centric approach, we think about design thinking (DT), which is becoming more and more widely used. We talk about how we should create solutions with users in mind, talk about their thoughts and feelings, make empathy maps and put a lot of sticky notes on the wall, but we often exclude one of the most important things: the actual user. Empathy should be built on real user experiences, pains and frustrations, not on our assumptions about them.

DT is a great approach for finding better, more effective and creative solutions to problems, but only when it is done right. Problem-solving without internal and external facts and perspectives will inevitably result in failure, regardless of how many sticky notes you use along the way. Ensuring truly optimal experiences requires DT to be coupled with critical thinking and real research data. That means moving beyond the developer’s and designer’s perspective and experience to include first- and third-party data that provides an accurate picture of the customer’s mind.

RELATED CONTENT: UX design: It takes a village

Critical thinking
Too often, DT is driven by the imaginations and biases of those empowered to solve a problem. This approach is counter to critical thought, which includes rational, skeptical, and unbiased evaluation of factual evidence.

Without unbiased, critical thought, DT is ineffectual; it becomes a design process where all ideas are good ones, and best intentions overshadow best practices. The failure of DT without real research data and critical thinking is exemplified by a 2010 international case study in which a South African company sought to solve pervasive clean-water issues for tribal villagers.

Plight of the PlayPump
The would-be benefactors of potable water originated the concept of providing fresh water to women and children in Africa by way of a “merry-go-round” playground pump system. The PlayPump was designed to solve two problems: provide a vital water supply and offer a recreational experience for children. Initially, the idea was widely applauded and supported financially.

In hindsight, we know the PlayPump failed because proper consideration wasn’t given to the culture served. African children had no historical reference (or inclination) to push or jump onto a spinning wheel, or hold on until the momentum stopped. Additionally, using child’s play as a means of manual labor raised ethical questions. Consequently, fixtures went unused and, in many cases, villagers removed the PlayPumps in favor of the original water pumps that had been replaced.

This is a clear example of how design thinking, even driven by the best intentions, can and will fail when it is based on assumptions and one’s own experience, without involving the individuals who will actually use the solution and without proper testing. All facts (cultural, environmental, and more) must be considered.

Take it one step further
The main stages of design thinking are well known: empathize, define, ideate, experiment, and evolve. The number of steps and their names may vary, but the idea is simple: before you jump to a solution, you need to understand who you are creating it for and what pains and challenges they face. Then you ideate with a cross-functional team that can provide a wide range of perspectives, and finally you validate the solution by testing it with users. There is no question that critical thinking is used in the design thinking process. What must be ensured is that at each stage, critical thought is given outside of the context and focus of creating the solution from the developer’s and designer’s inherently limited perspective. Let’s look at how this applies to each of the five stages of DT.

Empathize
By definition, empathy is the sharing of another person’s state of mind and emotions. Proper time and resources must be given to research, collecting all relevant data, and applying critical thinking to fully understand those who are to be served by a solution. If a team cannot relate to the customer’s state of mind, empathy is impossible. In the case of PlayPump, designers had compassion for thirsty villagers, but no understanding of their environment, cultural differences and day-to-day life.

Define
Insights collected during the empathize stage are applied to form a problem statement. All too often we start ideating a solution before making sure that the right problem is actually being solved; we have to address the root cause of the problem, not just its symptoms. All too often the problem is also expressed from the company’s point of view rather than the end users’. Critical thought must be given to ensuring relevance and benefit to the customer first. With PlayPump, using children to manually pump water in a playful way may have been cost-effective (versus solar-powered pumps, for example), but it failed because the children did not behave as expected. Ultimately PlayPump generated criticism for its cultural and ethical insensitivity.

Ideate
Critical thinking is vital at this stage to ensure out-of-the-box ideation that goes beyond new and creative ways to solve a problem. While the ideation stage is intended to be inspirational and collaborative, it is far too easy to become distracted in the process and lose sight of the empathic, well-defined vision. Once the idea of using playground equipment to pump water was agreed upon, the company likely focused on what type of play equipment was best suited, with no pause to consider the possible missteps made prior to this stage of the design process.

Experiment
Critical thinking is especially important during prototyping. It is easy to be blinded by one’s own solution and become overprotective, trying to prove the solution is right rather than actually validating whether it is. By including real users in testing and staying unbiased, the team will achieve a more efficient and robust experimental process. Acceptance, improvements, and even rejections must be driven by user experiences to determine solution constraints and challenges. Clearly, villagers were not included in the process to determine the PlayPump prototype’s efficacy.

Evolve
This iterative process is driven by critical thinking. Failing fast and learning from experiment results is key to a great solution. The experimenting stage redefines, informs, and drives changes and refinements. Here, critical thinking, based on perpetual learning, is applied to understand how end users think, behave, and feel in order to (re)empathize and optimize the experience. Learnings drive the non-linear DT process back to defining the problems to be solved.

The post The critical standard for design thinking appeared first on SD Times.


JFrog announces free C/C++ DevOps package search and discovery tool

JFrog has announced ConanCenter, a central repository for Conan, the open-source, decentralized, multi-platform package manager that developers use to create and share native binaries.

ConanCenter aims to improve C and C++ DevOps package search and discovery by providing a better search UX and an easy-to-find list of configuration options. 

“With more and more industries integrating onto Conan, including automotive, robotics, IoT, and healthcare, JFrog’s newly updated ConanCenter represents an important step in giving developers access to a curated and robust central public repository,” JFrog wrote in a post.

The company said that it is planning to integrate the center with other JFrog tools to make it even easier to securely access all available packages. 

According to JFrog, the new solution will provide tools such as:

  • Simple, faster search and discovery of C and C++ libraries
  • Better search UX and an easy to find list of configuration options
  • Recipe information available upfront
  • Backed by a free binary-building service for multiple OSes, architectures and compilers

“Most of the IoT world uses C/C++ and JFrog wants to enable DevOps for the IoT community in our next leap,” said Yoav Landman, the CTO of JFrog.

The post JFrog announces free C/C++ DevOps package search and discovery tool appeared first on SD Times.


Report: Majority of UK websites fail to conform with the EU’s GDPR

The majority of UK websites are still not following proper cookie consent practices, despite the General Data Protection Regulation going into effect in 2018. That is the finding of a new paper from researchers at MIT, UCL and Aarhus University, which showed that most UK websites do not conform to the requirements of the GDPR.

The purpose of the study was to determine how consent management platforms (CMPs) affect consent choices. CMPs were designed to help websites conform to the GDPR’s requirements for consent when collecting personal data.

As part of the study, researchers scraped the designs of the five most popular CMPs across the top 10,000 websites in the UK. They found that only 11.8% of websites met the minimal requirements set based on the GDPR.

The researchers also conducted an experiment with 40 participants to investigate how eight common designs affect consent choices. They found that notification style (banner or barrier) had no effect, removing the opt-out button from the first page increased consent by 22-23%, and providing more granular controls decreased consent by 8-20%.

The researchers hope that this study provides a basis for necessary regulatory action to enforce the GDPR, especially focusing on centralized, third-party CMP services as an effective way for increasing compliance.

The post Report: Majority of UK websites fail to conform with the EU’s GDPR appeared first on SD Times.


SD Times news digest: Samsung’s Galaxy XCover Pro for enterprises, C++ inliner improvements, and Apache project updates

Samsung introduced the Galaxy XCover Pro, an enterprise-ready smartphone built for business. 

The Galaxy XCover Pro allows users to tailor their experience with two programmable keys to create custom actions with one click, simplifying things like opening a scanner, turning on a flashlight or launching a CRM app without having to swipe through apps, the company explained.

The phone is also protected by Samsung Knox, a defense-grade multi-layered security platform, which provides users with security features such as hardware-backed protection, data isolation and encryption and boot and run-time protection.

The full details on the new phone are available here.

C++ inliner improvements
Visual Studio 2019 versions 16.3 and 16.4 include improvements to the C++ inliner such as the ability to inline some routines after they have been optimized. 

Modules that make heavy use of Eigen will see a significant throughput improvement, according to Microsoft. The optimizer takes up to 25-50% less time for such repros. 

Microsoft added that, depending on the application, users may see minor code quality improvements and/or major build-time (compiler throughput) improvements. 

The full details are available here.

The latest Apache project updates
The Apache Software Foundation listed many project updates from the previous week including the release of Apache Jackrabbit 2.20.0, Apache Commons Codec 1.14, OpenNLP 1.9.2, HttpComponents Core 5.0 beta 11 and the release of Apache Wicket 7.16.0 and 8.7.0.

The open-source content repository for the Java platform, Apache Jackrabbit 2.20.0, is an incremental feature release based on and compatible with earlier stable Jackrabbit 2.x releases.

The beta of Apache HttpComponents improves the handling of illegal or invalid requests on the server side and fixes a number of defects in HTTP/2 protocol code found since the last release, according to the Apache Software Foundation. 

The post SD Times news digest: Samsung’s Galaxy XCover Pro for enterprises, C++ inliner improvements, and Apache project updates appeared first on SD Times.


Guest View: Embracing a DevOps culture

DevOps, which refers to the increased communication and collaboration between development and IT operations, is an ever-changing, sometimes complicated term. While “dev” and “ops” were once siloed with separate philosophies, practices, tools, and workflows, they’re merging into one. The result? A more efficient, reliable process and product that helps organizations create stronger ties among all stakeholders throughout the development lifecycle. It’s no surprise that DevOps is rapidly gaining popularity around the world.

RELATED CONTENT:
CI/CD pipelines are expanding
Getting the most value out of your value streams

In my experience, organizations that fail to embrace DevOps do so at their own considerable risk. Not too long ago, a major real estate developer was looking to solve a critical problem: the company’s application kept crashing and no one could figure out why. Essentially, the company’s .NET installation was having problems with a third-party web asset management library, which had specific write-to-disk configuration requirements. These requirements were configured properly in the development environment, but not in production. Because developers were siloed from production, with no process for keeping these environments in sync, the company was unaware of the oversight. The end result? The company encountered ongoing performance problems in production and was unable to identify the root cause of application instance crashes that were masked by auto-restart policies.

On a foundational level, a dysfunctional culture was largely responsible for the company’s production mishaps, with code being “thrown over the wall” from development to production. Communication between these groups was so poor that a contractor was the primary liaison between developers, operations and management. Additionally, there was a loss of tribal knowledge every time a technical practitioner left the organization. None of the devs knew anything about the troublesome third-party tool, nor how it was being used. Even the contractor, the sole link between the siloed factions, was unaware of the problematic utility and the critical role it played.

Embracing a new culture
As we enter a new era of DevOps that takes advantage of collaboration, it is imperative that IT leaders look at the current state of their infrastructure and consider not only the technologies that will further enhance their application environments but also the cultural changes that might be necessary.

For starters, communication and knowledge transfer between teams are critical. Agile development practices tend to come with methods of communication available to the whole team, be they daily standups, Scrum or Kanban boards, or narrow Slack channels. The modern DevOps organization should include representatives from all teams in these channels so that everyone can participate and be aware of what is happening, and when.

Embracing automation
The number one lesson from DevOps and Agile is the need for automation. The first automation tool that most organizations adopt is continuous integration (CI), so that code is built early and often. To make this work well, organizations will also standardize environments: each environment (development, production, and so on) should be as similar as possible, supporting continuous integration and, eventually, continuous deployment (CD).

Once code is being built regularly, we want to improve testing to ensure code is of the highest quality before it is deployed. 2019 saw major innovations in automated testing tools, moving beyond unit tests and making it much easier for organizations to build functional tests that reflect how users actually engage with applications. Cloud computing has made it possible for organizations to run thousands of functional tests automatically in a short period of time. New analytics tools help organizations understand what code is changing and what needs to be tested, which allows this process to be optimized even further.

Finally, more organizations are embracing modern application monitoring tools that allow both devs and ops to understand how applications are working in production. Overall, this means everyone is contributing to the success of the application, which leads to end-user happiness and better business outcomes.

The post Guest View: Embracing a DevOps culture appeared first on SD Times.


Mark Zuckerberg sets long-term goals for 2030

Facebook CEO Mark Zuckerberg is no longer setting short-term New Year’s resolutions. Instead, he has a vision for 2030, explained in a lengthy Facebook post, that spans augmented reality glasses, a large investment in small businesses, and a generational change in how social media is used. For a company that has grown so enormous, its vision seems to be to cultivate all that is small, whether that’s small businesses or tight-knit communities.

Calling Facebook a “millennial company,” Zuckerberg said he hopes to see social media platforms used more frequently to address millennial problems such as climate change and the runaway costs of education, housing and healthcare. 

“Over the next decade, we’ll focus more on funding and giving a platform to younger entrepreneurs, scientists, and leaders to enable these changes,” Zuckerberg wrote. 

With close to 2.4 billion users around the world, Facebook also aims to build smaller social media communities that give people a sense of intimacy again.

As with small communities, bolstering small companies is also part of the big picture.

“Over the next decade, we hope to build the commerce and payments tools so that every small business has easy access to the same technology that previously only big companies have had,” Zuckerberg wrote. 

He went on to claim that while phones will still be the primary devices throughout much of this decade, breakthrough augmented reality glasses will be the next big thing. 

“Augmented and virtual reality are about delivering a sense of presence — the feeling that you’re right there with another person or in another place,” Zuckerberg wrote. “Instead of having devices that take us away from the people around us, the next platform will help us be more present with each other and will help the technology get out of the way.”

Meanwhile, to encourage new forms of governance, Facebook is adding an Oversight Board that allows communities to govern themselves by appealing content decisions to an independent board that will have the final say on whether something is allowed. 

“It’s rare that there’s ever a clear ‘right’ answer, and in many cases it’s as important that the decisions are made in a way that feels legitimate to the community,” Zuckerberg wrote.  

The post Mark Zuckerberg sets long-term goals for 2030 appeared first on SD Times.


DeepCode reveals the top security issues plaguing software developers

DeepCode has revealed the most important bugs as well as the top security vulnerabilities facing developers. The analysis comes from the company’s AI-powered code review tool, which analyzed hundreds of thousands of open-source projects to identify the issues that occur most frequently. 

According to the analysis, file I/O corruptions are the biggest general issue while missing input data sanitization is the top security vulnerability.

RELATED CONTENT:
Top considerations for DevSecOps to reduce security risk
HackerOne’s top 10 security vulnerabilities

“The problems that come up with file corruption are pretty serious; it can lead to data loss, or to unusable data being processed and an application crashing because of it,” Boris Paskalev of DeepCode told SD Times. “But even worse, an application can actually end up using corrupted data without knowing it and just keep working, which in sectors like aeronautics and self-driving cars could be detrimental or dangerous.” 

Paskalev explained that many of these vulnerabilities are occurring because software has become drastically more complex due to the large amounts of libraries being used. In addition, there are more hackers now trying to exploit these vulnerabilities. He added that the list of vulnerabilities is not exhaustive and developers should look into ones that are tailored to their type of application. 

“The hard part is that not all developers are trained, or have the time to spend, to actually search for them, and a lot of them are really tricky,” Paskalev said. “Even during a normal code review you can oftentimes miss some of them, and the main reason is you might not necessarily be looking for this specific thing.”

According to DeepCode, the most important bugs include: 

  1. File I/O corruptions
  2. API contract violations
  3. Null references
  4. Process/threading deadlock problems
  5. Incorrect type checking
  6. Expression logic mistakes
  7. Regular expression mistakes
  8. Invalid time/date formatting
  9. Resource leaks
  10. Portability limitations
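
File I/O corruption at the top of the list often traces back to partial writes: a crash or power loss mid-write leaves a torn, half-written file behind. One common mitigation, sketched here in Python as an illustration rather than as DeepCode’s specific recommendation, is to write to a temporary file in the same directory and atomically swap it into place:

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write data so readers see either the old file or the new one,
    never a torn, half-written file."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())  # push bytes to disk before the swap
        os.replace(tmp_path, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)  # clean up the temp file on any failure
        raise
```

The temporary file must live on the same filesystem as the target, which is why it is created in the target’s own directory.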

The most important security vulnerabilities include:

  1. Missing input data sanitization
  2. Insecure password handling
  3. Protocol insecurities
  4. Indefensive permissions
  5. Man-in-the-Middle attacks
  6. Weak cryptography algorithms
  7. Lack of information hiding
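
“Missing input data sanitization” is the classic injection family. The standard defense for SQL, sketched here with Python’s built-in sqlite3 module (the table and data are hypothetical), is to pass user input as bound parameters rather than splicing it into the query string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn: sqlite3.Connection, name: str):
    # The ? placeholder makes the driver treat `name` strictly as data,
    # so quotes and SQL keywords in the input cannot alter the query.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user(conn, "alice"))             # normal lookup
print(find_user(conn, "alice' OR '1'='1"))  # injection attempt matches nothing
```

The same idea, keeping untrusted input out of the code path, generalizes to shell commands, HTML output and file paths.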

“As developers enter a new year and decade, we want them to be aware of the most important coding problems for 2020 and beyond,” said Paskalev. “With DeepCode by their side, they’ll be able to make sure that these issues and countless others don’t affect their software.”

The post DeepCode reveals the top security issues plaguing software developers appeared first on SD Times.


How Slack’s custom bots helped streamline Color’s lab

Slack has become a place where developers can find an app for handling almost any task, or they can build their own easily. Bear Douglas, director of developer relations at Slack, has noticed that more and more, developers are no longer just building tools for themselves, but recognizing the value they’ve received from these tools and asking what value they could provide to their whole team. 

To make building apps even easier, Slack recently released Workflow Builder, a WYSIWYG editor that allows users to create custom workflows to automate routine tasks. “We’ve seen that totally take off with users, and the greatest thing about that, as far as I’m concerned, is that this value that was essentially locked in our platform for developers, because you had to be conversant with how to use an API, is now starting to become something that everyone can tap into,” Douglas said.

RELATED CONTENT: Productivity tools are crucial in the current development landscape

One company that has used a custom Slack bot to significantly increase productivity is Color, a genetics testing company. Color has a fully automated lab, and it developed a bot that alerts lab workers when samples have been processed.

According to Justin Lock, head of R&D at Color, a traditional lab is made up of a number of robots, with humans completing various tasks between them. But this isn’t a particularly effective way of getting things done, Lock explained. 

There were two steps Color needed to take to improve its lab. First, it brought the robots into a small envelope, allowing them to pass things (both information and physical objects) among each other. Second, the robots needed to communicate with humans about which part of the process they are at, so the humans know what needs to happen next. 

To accomplish this, they turned to Slack. “We were using Slack within the company to communicate with each other, and so we thought, you know, we’re always on Slack anyway, is it possible to get these robots to just ping us and tell us when they’re done via Slack?” said Lock.

Lock explained that Slack’s website has a number of resources, like comprehensive instructions on how to build different bots and how to integrate with the Slack infrastructure. He added that the person on their team who created the bot wasn’t even a software engineer; he was a mechanical engineer. “He was able to write the executable, design the kill script, and within probably two days of trial development, we were able to start pinging each other via robots in the lab.”
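
The pattern Lock describes, a machine posting a short status message into a channel, maps onto Slack’s incoming webhooks: the sender POSTs a small JSON payload to a webhook URL that Slack issues for the channel. A minimal sketch follows; the batch ID and message wording are invented for illustration, and this is not Color’s actual bot:

```python
import json
import urllib.request

def build_notification(sample_batch: str, status: str) -> dict:
    """Build a Slack incoming-webhook payload announcing lab progress."""
    return {"text": f"Batch {sample_batch}: {status}"}

def post_to_slack(webhook_url: str, payload: dict) -> None:
    """POST the payload to a Slack incoming-webhook URL (network call)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # requires a real webhook URL to succeed

payload = build_notification("B-1042", "sequencing complete")
print(payload["text"])
```

A robot controller calling something like `post_to_slack(url, payload)` when a run finishes is all the “ping us when they’re done” behavior requires.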

Lock believes that the products that companies build are usually a reflection of their company culture. For Color, their goal is to provide high-quality clinical genomics and affordable, efficient, high throughput testing. “When you think about delivering health care at scale, ensuring high quality, it becomes really important to think about how you’re actually building your infrastructure from the ground up so that it’s efficient and scalable,” said Lock. “And I think historically we’ve seen that’s something that hasn’t been prioritized by a lot of organizations. And so Color has spent a lot of time and energy really thinking deeply about each step in our lab process and subsequent downstream processes to make sure that the data we’re generating is as high quality as possible and also as efficient as possible. And just generally being able to use software tools like Slack and others, just really enabled us to scale.”

The post How Slack’s custom bots helped streamline Color’s lab appeared first on SD Times.


SD Times news digest: Synopsys acquires Tinfoil Security, Sisense announces $100 million funding round, and Postman updates its plans

Synopsys announced that it acquired Tinfoil Security, an innovative provider of dynamic application security testing (DAST) and API security testing solutions. 

“Tinfoil Security provides Synopsys with proven DAST technology that can be seamlessly integrated into development and DevOps workflows. Furthermore, Tinfoil Security’s innovative API scanning technology addresses an emerging demand in the market and will further differentiate the Synopsys portfolio,” said Andreas Kuehlmann, the co-general manager of the Synopsys Software Integrity Group.

The terms of the deal are not being disclosed.

Sisense announced a $100M+ funding round
Sisense announced a $100M+ funding round, bringing the company’s valuation to $1 billion. 

The new funding will bolster the company’s sales, marketing and development efforts to increase market share, and accelerate investment in its platform. 

Sisense offers an independent analytics platform for builders to simplify complex data and build and embed analytic apps that deliver insights to everyone inside and outside their organizations. 

Postman updates its plans
Postman announced a new structure for all of its plans and pricing that will go into effect as of February 1, 2020. 

“[Usage growth] has really pushed us to expand the definition of what Postman is, going from being a tool for individual developers to also being a place for collaboration across large teams,” Postman wrote in a blog post. “As a result of this shift, we’re offering new plans to help organizations scale, grow, collaborate, and securely manage hundreds of Postman users.”

Postman Team is now $12 per user per month, Postman Business is $24 per user per month, and Postman Enterprise pricing is available by contacting Postman. 

HERE announces new products, partnerships and platform updates 
HERE announced new products, partnerships and platform updates that extend its location data and technology for developers.

The products include HERE Navigation On-Demand, a SaaS-based solution that furthers vehicle-centric navigation, and HERE Lanes, a toolkit that seeks to make driving safer. 

HERE also announced new collaborations that include a U.S. telecom provider and a major conglomerate. The HERE Marketplace has also been expanded. 

The full details are available here.

The post SD Times news digest: Synopsys acquires Tinfoil Security, Sisense announces $100 million funding round, and Postman updates its plans appeared first on SD Times.
