CoreValue, with 8 development labs across Ukraine, is uniting with three other well-known European companies — IT Kontrakt, Sevenval, and SolidBrain — to form Avenga. Together, we are one global IT company on a mission to transform business.
Avenga combines the speed, flexibility, and innovative power of a start-up with the implementation expertise and delivery processes of a highly experienced large provider.
“Avenga acts as an agile ecosystem, creating a collaborative platform where companies can find the exact expertise they are in search of, for their digitalization projects,” says Avenga CEO Jan Webering.
Avenga has already built long-term relationships with more than 350 clients in the pharmaceutical, healthcare, financial, automotive, and insurance sectors, as well as the real estate, media, and commercial sectors.
Avenga has supported these enterprises and large companies in their digital transformation every step of the way, across the entire digital value creation chain: from strategy and user experience to the implementation of software and IT solutions, including hosting and operating services.
Avenga stands for global digital leadership with its roots in Europe. The company deliberately relies on flat hierarchies without fixed headquarters.
“We regard centralized corporate management as obsolete,” says Jan Webering.
“As a global IT and digital transformation champion, we disrupt the outdated and conventional IT market. We do that with the power of our experts, who allow us to develop highly innovative solutions custom-made for each client. We believe this approach will stimulate the spread of innovation across Europe,” says the Avenga CEO.
The CoreValue team has accumulated significant experience in developing innovative IT solutions for the healthcare, pharmaceutical, media, and financial industries. The CoreValue team of 480 employees located at 8 development labs in Lviv, Kyiv, Vinnitsa, Cherkasy, Khmelnitsky, Lutsk, Ivano-Frankivsk, and Poltava is becoming part of an international brand. Until spring 2020, our company will be identified as CoreValue powered by Avenga.
“This is a responsible and honorable mission to participate in establishing a united organization providing integrated IT services, and I want to invite you all to take part in this exciting event,” said Integration Director Yuriy Adamchuk to the CoreValue team.
The employees themselves expect the merger to create synergies in implementing new large-scale projects for clients through a mutual exchange of experience with Avenga.
“It has to be something good, something really big that will bring synergy and transition to a new level in customer relations and business development,” said Stepan Shikerynets, an employee of the company.
“We are now Avenga. CoreValue is becoming a big, integral part of a new brand. We are not just IT, but also people: professionals like you!” assured Avenga Marketing Director Lilia Smirnova.
Avenga was established in November 2019 as a merger of 4 IT companies: IT Kontrakt, Solidbrain, CoreValue and Sevenval. The company provides IT consulting services and delivers strategy, customer experience, solution engineering, managed services and software products. With over 2,500 professionals on board, it supports its clients with business solutions for pharma & life sciences, insurance, finance, automotive, and real estate sectors with more to come.
Avenga maintains a total of 18 locations across Europe, Asia, and the USA, and is backed by funds managed by Oaktree Capital Management L.P. and Cornerstone Partners to further expand internationally.
How to Train an AI with GDPR Limitations
In May 2018, the General Data Protection Regulation (GDPR) came into force. This European Union regulation aims to protect the privacy of EU citizens. And it affects not only Europe-based companies but all companies processing and holding the personal data of those residing in the EU. In essence, GDPR imposes new data regulations on modern technologies. But the regulation’s severe limitations are a real game-changer for AI and machine learning development. Let’s take a look at what challenges have arisen for the development of artificial intelligence in Europe with GDPR.
In this article, you’ll read about:
- Why the GDPR affects AI development
- The main challenges that arise from GDPR limitations on AI
- How to develop GDPR-friendly artificial intelligence
Why the GDPR affects AI development
Why does the GDPR have a significant impact on artificial intelligence? This regulation touches the two main aspects of machine learning (ML). First, it enhances data security by imposing strict obligations on companies that collect and process any personal data. Machine learning is closely connected with big data: most AI-based systems require large volumes of information to train and learn from, and personal data is usually among these training datasets. This means the impact of the GDPR on machine learning and AI development is inevitable.
Second, the regulation explicitly addresses “automated individual decision-making” and profiling. According to Article 22, a person has a right not to be subject to either if they produce legal effects concerning him or her. Automated individual decision-making here covers an AI’s decisions made without any human intervention. Profiling means the automated processing of personal data to evaluate certain things about the data subject. For instance, an AI system might analyze a user’s credit card history to identify the user’s spending patterns.
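The credit-card example above is exactly the kind of automated processing Article 22 covers. Here is a minimal, purely illustrative Python sketch (the `spending_profile` function and the sample data are hypothetical, not part of any real system) of how such profiling might aggregate a user’s transaction history:

```python
from collections import defaultdict

def spending_profile(transactions):
    """Group card transactions by merchant category and total the
    amounts -- the kind of automated profiling Article 22 addresses."""
    totals = defaultdict(float)
    for category, amount in transactions:
        totals[category] += amount
    # Rank categories by spend to characterize the user's habits.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

history = [("groceries", 54.20), ("travel", 310.00),
           ("groceries", 23.75), ("dining", 41.10)]
print(spending_profile(history))  # travel ranks first
```

Even a trivial aggregation like this becomes profiling in the GDPR sense once its output is used to evaluate or make decisions about the data subject.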
The GDPR provides for the right not to be subject to a decision made entirely by a machine, with some exceptions.
What challenges arise from GDPR limitations on AI?
How will GDPR affect AI? GDPR has six data protection principles at its core. According to a report by the Norwegian Data Protection Authority, artificial intelligence faces four challenges associated with these principles: fairness and discrimination, purpose limitation, data minimization, and transparency and the right to information.
Six core principles of GDPR
Source: Network ROI
- Fairness and discrimination
The GDPR fairness principle addresses fair processing of personal data: in other words, data must be processed with respect for the data subject’s interests. The regulation also requires that a data controller take measures to prevent discriminatory effects on individuals. It’s no secret that many AI systems are trained on biased data, or that their algorithmic models contain certain biases. That’s why AI systems often demonstrate racial, gender, health, religious, or ideological discrimination. To comply with GDPR, companies have to learn how to mitigate those biases in their AI systems.
- Purpose limitation
The purpose limitation principle states that a data subject has to be informed about the purpose of data collection and processing. Only then can a person choose whether to consent to processing. The interesting thing is that sometimes AI systems use information that’s a side product of the original data collection. For instance, an AI application can use social media data for calculating a user’s insurance rate. The GDPR states that data can be processed further if the further purpose is compatible with the original. If it isn’t, the data collector should get additional approval from the data subject. But this principle has a few exceptions.
Further data processing is always compatible with the previous purpose if it’s connected to scientific, historical, or statistical research. Herein lies a problem, since there’s no clear definition of scientific research, which means that in some cases AI development may be considered such research. The rule of thumb is that when the AI model is static and already deployed, the purpose of its data collection can’t be regarded as research.
- Data minimization
This principle controls the degree of intervention into a data subject’s privacy. It ensures that the data collected fits the purpose of the project. Collected information should be adequate, limited, and relevant. These requirements encourage developers to think through the application of their AI models. Engineers have to determine what data, and in what quantity, is necessary for the project. This can be a challenge: it’s not always possible to predict how and what a model will learn from data. Developers should continuously reassess the type and minimum quantity of training data required to fulfil the data minimization principle.
- Transparency and the right to information
The GDPR aims to ensure that individuals have the power to decide which of their information is used by third parties. This means that data controllers have to be open and transparent about their actions. They should provide a detailed description of what they’re doing with personal information to the owners of that information. Unfortunately, with AI systems, this may be hard to do.
That’s because AI is essentially a black box. It’s not always clear how the model makes certain decisions, which makes it hard to explain an AI’s complicated processes to an everyday user. Naturally, when AI is not entirely transparent, the question of liability arises.
According to the GDPR, a data subject has the right to an explanation of an automated decision. So data controllers have to figure out ways to give one.
How to develop GDPR-friendly artificial intelligence
Like it or not, IT companies have to ensure all their processes are compliant with GDPR. Data processors and data controllers who violate this regulation will have to pay significant fines. Luckily, there are several ways of making AI compliant with GDPR. Take a look at these GDPR-friendly methods of AI development.
We need to design and use machine learning algorithms in a way that is compliant with the GDPR, because done correctly, they will generate value for both service providers and data subjects.
GANs (Generative Adversarial Networks). Today, the trend in AI development is to use less data more efficiently rather than to accumulate lots of data. A GAN reduces the need for training data: its main idea is to generate synthetic data that resembles the real training data. To achieve this, we train two neural networks against each other. One is the generator, the other is the discriminator.
The generator learns how to put data together to produce samples that resemble the real data. The discriminator learns to tell the difference between real data and the data produced by the generator. The catch is that GANs still require a lot of data to be trained properly. So this method doesn’t eliminate the need for training data; it lets us start from less initial data and generate a lot of similar, augmented data. But if we start from a small initial dataset, we risk ending up with a biased AI model. So generative adversarial networks don’t solve these issues fully, though they do reduce the need for initial data.
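To make the generator/discriminator duel concrete, here is a deliberately tiny numpy sketch (an assumption-laden toy, not a production GAN): the “data” is a 1-D Gaussian, the generator is a linear map of noise, and the discriminator is logistic regression, each taking alternating gradient steps.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

# Generator G(z) = a*z + b turns noise into fake samples;
# discriminator D(x) = sigmoid(w*x + c) scores "real vs. fake".
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr = 0.02

for _ in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # the "training data"
    z = rng.normal(size=32)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1, D(fake) toward 0.
    s_r, s_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_r) * real - s_f * fake)
    c += lr * np.mean((1 - s_r) - s_f)

    # Generator step: adjust a, b so that D(fake) moves toward 1.
    s_f = sigmoid(w * fake + c)
    a += lr * np.mean((1 - s_f) * w * z)
    b += lr * np.mean((1 - s_f) * w)

print(b)  # the generator's offset drifts toward the real mean
```

The same two-player structure scales up to image-generating GANs; only the models and the gradient machinery get bigger.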
Federated learning is another method of reducing the need for data in AI development. Remarkably, it doesn’t require collecting data at all. In federated learning, personal data doesn’t leave the system that stores it: it’s never collected or uploaded to a central server. Instead, an AI model trains locally on each device with local data, and the trained model is later merged into the master model as an update. The problem is that a locally trained AI model is limited, since it’s personalized. And even if no data leaves the device, the model is still largely based on personal data. Unfortunately, this contradicts the GDPR’s transparency principle.
The AI model is personalized on the user’s phone. All the training data remains on the device and is not uploaded to the cloud.
Source: Google AI Blog
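The core of federated learning can be sketched in a few lines of numpy. This is a minimal federated-averaging toy under simplifying assumptions (linear models, synthetic client data, names like `federated_round` invented for illustration): each client runs gradient descent on its own data, and only the weights — never the data — travel back to be averaged.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=20):
    """One client's local training: plain gradient descent on its
    own data; the raw data never leaves this function's scope."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, clients):
    """One federated-averaging round: each client trains locally,
    only model weights are sent back, and the server averages them."""
    local_weights = [local_step(w_global.copy(), X, y) for X, y in clients]
    return np.mean(local_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three devices, each with its own private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, clients)
print(w)  # close to [2, -1] without any data leaving a "device"
```

Real deployments add secure aggregation and weighting by client dataset size, but the privacy-relevant property is already visible here: the server only ever sees weight vectors.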
Transfer learning is a method that enables the effective reuse of prior work and contributes to the democratization of artificial intelligence. With this method, the AI model doesn’t train from scratch; instead, it starts from an existing model and retrains it for the current purpose. Since it builds on a pre-existing model, it takes significantly less computing power and requires less data. But transfer learning works best when the previous model was trained on a large dataset, and that model has to be reliable and free of biases. So transfer learning can minimize data use but doesn’t fully eliminate the need for data.
The explainable AI (XAI) method helps to reduce the black box effect of artificial intelligence. The goal of explainable AI is to help humans understand what’s happening under the hood of an AI system. With this method, an AI model can explain its decisions, characterize its own abilities, and give insights into its future behavior. Explainable AI cannot directly reduce the need for data, but it lets us understand exactly which data is required to improve model accuracy, so researchers can extend the training dataset with only the required data rather than adding a lot of meaningless data.
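One common model-agnostic XAI technique is permutation importance: shuffle one input feature at a time and measure how much the model’s error grows. Here is a small self-contained sketch (the data, the least-squares “black box”, and the function names are illustrative assumptions, not any particular XAI library):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=200)  # only feature 0 matters

# A fitted "black box": here just a least-squares model for simplicity.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ w

def permutation_importance(predict, X, y):
    """Score each feature by how much shuffling it hurts the model --
    a simple model-agnostic explanation technique."""
    base = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(np.mean((predict(Xp) - y) ** 2) - base)
    return np.array(scores)

imp = permutation_importance(predict, X, y)
print(imp)  # the score for feature 0 dwarfs the others
```

An output like this tells a team which inputs actually drive decisions — both a compliance aid for the GDPR’s explanation requirement and a guide to which training data is worth collecting.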
The simple truth is that all of the AI training methods we’ve mentioned are somewhat limited. They may comply with one GDPR principle but contradict another. This means that to train AI models properly and achieve great results, you’ll have to combine several methods.
All in all, artificial intelligence and GDPR are closely connected. The data privacy regulation affects not only the development of artificial intelligence (AI) in Europe but also any company whose AI system processes the data of EU residents.
No doubt the impact of GDPR on machine learning will be huge. Tech companies have to revise their data privacy and artificial intelligence policies. Data controllers have to ensure that their AI systems don’t violate the regulation. Luckily, there are several methods of making AI compliant with GDPR. GANs, XAI, federated learning, transfer learning, and differential privacy can help you develop a GDPR-friendly artificial intelligence system.
If you’re thinking about making your AI compliant with GDPR, contact Intellias. Our experts will help to ensure your system meets the new data privacy requirements.
FinTech Industry Leader
Innovative Product Development: The Product Leader’s Cheat Sheet
Product leaders have broad, central roles and multiple challenges
Software products live in a complex marketplace, which is why experienced product leaders play such an important role in deciding on the direction of a company’s innovative product development strategy.
This role covers a range of responsibilities, from product design through to deciding on the right time to go to market. It is a key role that requires a strong mix of technical skills to get to grips with products, soft skills to sell product ideas, and business acumen to know which products will ultimately contribute to the bottom line.
To complicate matters, product leaders are involved in countless facets of innovative product development. In their roles, product leaders must collaborate with technical departments, analyse markets and competitors and engage with customers and external partners. Getting product innovation right, therefore, requires a targeted approach.
Four tips for innovative product development success
Product leaders focused on getting the most out of product innovation strategies must touch on a plethora of points while also moving quickly. To maintain the bottom line while also satisfying stakeholders, we think product leaders should concentrate on the following four points.
1. Focus on new technologies that offer the most impact
When leading product development, product leaders should aim to make the maximum impact on the market, and disruptive technology is often the best instrument for doing so. Whether it is an artificial intelligence (AI)-enhanced software product or an application that relies on the Internet of Things (IoT), products based on new technologies can provide a jump start.
Product managers must look past the noise and identify technologies that can provide a return in the real world. A technical feasibility study is essential because it helps weed out tech that won’t make it in the wild, instead helping product leaders to invest in product lines that stand a high chance of success.
2. Concentrate on what the user wants
Product line profitability is an important metric, but no product will succeed in the long run if it does not satisfy user demands. Innovating to meet user needs can set a new product apart from its competitors.
How can product leaders gain an understanding of customer requirements? Your data is the answer. Data analysis can help product teams redefine, reposition, improve and grow existing products. Real-time, responsive user analytics allows teams to give customers a contextualised product experience.
Again, technology innovation can be a game-changer here. From augmented reality through to blockchain-based applications, adopting innovation allows product leaders to distinguish their product lines. For instance, AI is considered the most disruptive technology according to Gartner’s 2019 CIO Survey, and AI-enhanced products and natural language processing can create user experiences and services that are truly game-changing.
3. Use data to manage profit and loss
Pricing a product is one of the biggest challenges product leaders face. It is not merely a matter of making a product price-competitive, but also of maximising the profit a product line delivers to the bottom line.
Understanding customer buying behaviour in detail can assist product leaders in designing pricing strategies that overdeliver. Using data science, a product leader can match an understanding of customers’ behaviour with a detailed analysis of the costs of delivering a product. This analysis can enable tailored offerings that maximise the profits derived from a product throughout its revenue-producing lifetime.
4. Maintain data security and compliance for customer loyalty and trust
Experienced product leaders know that customer trust is a pillar of long-term product line success. For software products, data protection and meeting compliance requirements are a major part of maintaining trust. That’s why product leaders cannot drop the ball when it comes to the safety of their customers’ corporate or personal information.
Since effective innovation depends heavily on in-depth data analysis, product leaders are well advised to put data safety and compliance front and centre. According to Gartner, 35% of businesses use multiple data security tools; this number is expected to reach 60% of organisations by 2020.
As cybersecurity threats and hacking methods become more advanced, the latest security solutions keep up by leveraging blockchain, data tokenisation, artificial intelligence, adaptive security approaches, and other modern security techniques.
Cutting to the chase
Particularly in the software industry, product leaders cannot afford to sit on their hands; the technology world moves too quickly. When handling product portfolios, these key staff members must stay highly focused, balancing fast-moving technology with the changing requirements of customers while also keeping stakeholders happy.
We’ve outlined four points that product leaders should keep front and centre when developing product innovation strategies. It can be a tough task, however, and ELEKS stands ready to help with any challenging technical aspects.
Get in touch with us for end-to-end assistance with product innovation.
Why software needs DXP
Enterprise software companies stand to increase revenue by up to $1B within three years of investing in improved customer experience (CX)—a 30% higher return than is estimated for non-SaaS companies. To enable, attract, and retain customers through engaging experiences, software companies must focus on the too-often-neglected marketing department. Digital experience platforms (DXPs) automate and personalize marketing initiatives within software companies, but also provide rapid deployment of apps, portals, and websites, including content management.
To be classified as a true DXP, a solution must deliver all three of the following:
- Facilitate rapid development (apps, personalized content, and more)
- Connect data and media through APIs
- Manage and store content
Let’s look at how DXPs empower the collaboration, personalization, and optimization that lead to sustained competitive differentiation.
Collaboration for scalability
Software companies are often a collection of siloed platforms, teams, and data — all working independently while learning nothing from each other. As business increases, so does data — and without proper consolidation, data disparity increases in parallel. Consumer attention is a scarce commodity, and companies must deliver data-driven personalized experiences to capture interest and engagement. Without synchrony, however, silos will leave software companies unable to keep pace with customer demand.
Experience platforms empower team collaboration, helping teams work with greater transparency and in a less siloed fashion. Not only do DXPs centralize and improve control; they also eliminate repetitive and inefficient processes. From content to engagement to personalization and optimization, experience platforms:
- Increase speed and reliability of work
- Improve the efficiency with which work is performed at scale
- Empower best practice maintenance from concept to delivery
To scale with customer and growth demands, DXPs are essential for internal alignment, agility, and empowering personalization that differentiates the brand from the competition.
“PERSONALIZATION CAN REDUCE ACQUISITION COSTS BY AS MUCH AS 50%, LIFT REVENUES BY 5–15%, AND INCREASE MARKETING SPEND EFFICIENCY BY 10–30%.” MCKINSEY
A recent Epsilon study shows 80% of customers are more likely to purchase a product or service from a brand that provides personalized experiences.
Experience platforms improve personalization through disparate data consolidation that enables a true 360-degree view of users and prospects. Data integration and automation are simplified through product, platform, and API connections. Batch data (high volumes collected over time) and streaming data (continuously generated from multiple sources) are both processed immediately to enable timely and relevant offers. With this real-time personalization at scale, software companies can eliminate the latency that occurs with batch-only systems.
Experience platforms combine both internal and external data sources to drive intelligent decision making and more accurate customer profile segmentation. Equally if not more important, DXP automation frees IT teams from repetitive technical maintenance to spend more time working with engagement teams.
Software companies understand the need for continuous optimization through the product lifecycle, and this holds true for marketing as well. Once a DXP has been adopted, data silos have been consolidated, and all API connections made—the task of perpetual learning, acting, and improving on insights can begin. This is accomplished through DXP self-learning automation powered by machine learning (ML).
Digital experience platforms leverage artificial intelligence (AI) and machine learning to empower companies with a clear view of customers and prospects, and provide actionable data. As a result, individuals are no longer treated as groups, only personal and relevant engagements are delivered, and experience is improved exponentially.
However, software companies must begin the optimization process before the DXP is even integrated. First, a company must understand customers and their journeys, and where there is greatest need and opportunity for improvement through innovation. Second, all existing and potential data points must be identified. These two proactive steps will ensure that a company understands current and future states and what existing insights to leverage to move from one to the other.
Software companies exist to solve enterprise challenges, but to remain competitive and grow, businesses must also espouse technology solutions to solve internal needs. Consolidated, actionable, accessible, and scalable data is required for advanced applications like AI/ML. These drive automation and personalization, which deliver improved engagement, conversion, and retention.
At SoftServe, we have over 26 years of experience in software development with over 5,500 engineers and developers at the ready. We are agnostic experts in big data, AI/ML, and personalization and are preferred partners with leading experience platform providers. If a company hasn’t already adopted a DXP, we can advise which is the best option, and for those that have, we empower our clients to ensure the experience platform is fully leveraged and perpetually optimized.
It’s crucial to choose the right DXP not only on its own merits, but on how the solution aligns with the company’s culture and vision, and the solution’s ability to fully integrate with other software. There are ample DXP solutions to choose from. Examples include:
- Adobe is an industry leader
- Sitecore is a top performer for retail/B2C (where personalization and CX are king)
- Quadient is known as a leader for companies with high-volume processing demands
At SoftServe, we’re agnostic DXP experts and preferred partners with the world’s leading DXP providers. We advise which platform is best for each client and provide expertise and service to ensure ongoing optimization.
Let’s talk about where you are in your experience platform journey today.