Information Age – Insight and Analysis for the CTO
https://www.information-age.com/

What can my organisation do about DDoS threats?
https://www.information-age.com/what-can-my-organisation-do-about-ddos-threats-123511748/
Wed, 28 Aug 2024 – By Nick Martindale

DDoS attack

Nick Martindale looks into emerging DDoS attacks, what your organisation can do to reduce the threat, and the role that AI could play

The post What can my organisation do about DDoS threats? appeared first on Information Age.


According to research by F5 Labs, there were an alarming 2,127 distributed denial of service (DDoS) attacks in 2023 – a rise of 112 per cent compared to 2022. The damage such attacks can do was highlighted by the July attack on Microsoft Azure, which left many Microsoft services unavailable for almost 10 hours.

Jack Smith, hive leader at cybersecurity firm CovertSwarm, says such attacks are a significant, evolving cyberthreat. “They aim to overwhelm a target’s online services by flooding them with enormous traffic, rendering the services unavailable to legitimate users,” he explains. “In simple terms, a DDoS can be achieved by exhausting the resources of the target, such as bandwidth, memory or CPU power.”

DDoS attacks are typically performed by botnets – automated malicious code infrastructure built from compromised machines – says Ken Dunham, cyber threat research director at Qualys Threat Research Unit.

“This makes it very difficult to detect and stop,” he says. “If a botnet is large, the firepower of a DDoS attack at any given target can be substantial. There are other forms of DDoS attack, such as DNS Reflection attacks, which can generate even higher amounts of DDoS impact upon availability of the targeted resource. The most common type of DDoS attack is from eCrime botnet infrastructure, often Russian in nature.”

What DDoS threats should I be looking out for?

An emerging DDoS threat is a ‘smurf’ attack, says Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University. “This relies on misconfigured network devices to allow packets to be sent to all computer hosts on a particular network via the broadcast address of the network, rather than a specific machine,” he says. “The network then serves as a ‘smurf’ amplifier. In such an attack, the perpetrators will send large numbers of IP packets with the source address appearing as the victim.”
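
From the defender's side, the amplification pattern Curran describes is visible in flow logs: a victim suddenly receives ICMP echo replies from a large number of distinct hosts it never pinged. A minimal detection sketch (the packet-record format and threshold are illustrative, not from any particular tool):

```python
from collections import defaultdict

def detect_smurf_victims(packets, reply_threshold=100):
    """Flag destination IPs receiving ICMP echo replies from an unusually
    large number of distinct hosts -- the signature of a smurf attack, where
    spoofed echo requests to a broadcast address make every host on the
    amplifier network reply to the victim at once."""
    repliers = defaultdict(set)  # victim IP -> distinct hosts replying to it
    for pkt in packets:
        if pkt["type"] == "icmp-echo-reply":
            repliers[pkt["dst"]].add(pkt["src"])
    return {dst for dst, srcs in repliers.items() if len(srcs) >= reply_threshold}
```

A real deployment would feed this from NetFlow or packet capture rather than dictionaries, but the distinct-repliers-per-victim heuristic is the core idea.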

There are various motivations behind such attacks, but the most common motive is financial gain. “Attackers often use these as a form of ransom whereby the attacker holds the impacted service to ransom until their financial demands are met,” explains Andy Grayland, CISO at cyber-threat intelligence firm Silobreaker.

“Less common, but more often heard about, is DDoS for political or strategic aims. Here, the attacker has no desire to stop the attack, but instead either wishes to completely stop the company from operating or has ‘ransom’ demands in the form of an action rather than cash payment. An example of such a motive might be demanding that a company stops trading with Israel due to the conflict in Gaza.”

What can my organisation do to reduce the threat?

There are steps that organisations can take to reduce the risk of being hit by an attack, or minimise its impact. “Businesses can prevent attacks using managed DDoS protection services or through implementing robust firewalls to filter malicious traffic and deploying load balancers to distribute traffic evenly when under heavy load,” advises James Taylor, associate director, offensive security practice, at S-RM. “Other defences include rate limiting, network segmentation, anomaly detection systems and implementing responsive incident management plans.”
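
Rate limiting, the first of those extra defences, is commonly implemented as a token bucket: each client has a refilling allowance of requests, and traffic beyond the sustained rate is dropped or challenged. A minimal sketch (the rate and burst values are illustrative):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter of the kind placed in front of an
    API endpoint: `rate` tokens refill per second, up to `burst` held."""

    def __init__(self, rate, burst, now=None):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # request passes
        return False      # request dropped (or queued/challenged)
```

In practice this logic lives in a reverse proxy or API gateway, keyed per client IP, rather than in application code.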

But while firewalls and load balancers may stop some of the more basic DDoS attack types, such as SYN floods or fragmented packet attacks, they are unlikely to handle more sophisticated DDoS attacks which mimic legitimate traffic, warns Donny Chong, product and marketing director at DDoS specialist Nexusguard.

“Businesses should adopt a more comprehensive approach to DDoS mitigation such as managed services,” he says. “In this setup, the most effective approach is a hybrid one, combining cloud-based mitigation with on-premises hardware which can be managed externally by the DDoS specialist provider. It also combines robust DDoS mitigation with the ability to offload traffic to the designated cloud provider as and when needed.”

For smaller firms, installing multilayer security solutions or monitoring network traffic to help identify bogus or fake requests is a good starting point, says Jake Moore, global cybersecurity advisor at ESET. Moving to the cloud can also help mitigate attacks, due to the higher bandwidth and resilience of the infrastructure.

“However, even with such protection, each year threat actors become better equipped and use more IP addresses such as home IoT devices to flood systems, which can make systems completely unusable,” he says. “A disaster recovery plan is therefore crucial in case of a DDoS attack – this includes having backup servers, website and alternative communications channels.”

The recent Azure incident also demonstrates the importance of regular testing for DDoS mitigation systems, says Chong. “According to Microsoft’s own Post Incident Report (PIR), the global disruption was due to an error in the installation of its DDoS mitigation defences that incorrectly amplified the attack rather than mitigating it,” he says. “Beyond the obvious tests on the effectiveness of DDoS defences, it’s imperative that businesses ensure that the systems are integrated properly in the first place.”

What about artificial intelligence?

There’s also potential for artificial intelligence (AI) to impact DDoS, both positively and negatively, in the coming years. “On the offensive side, attackers may use AI to identify and exploit vulnerabilities more efficiently, adapting attacks in real-time to evade defences,” suggests Smith. “For example, AI can be used to launch more sophisticated attacks by learning from the target’s defence systems and altering attack strategies accordingly.”

But AI is an equally powerful tool for defence. “AI-driven security solutions can analyse large amounts of data to identify patterns indicative of a DDoS attack, usually before it becomes evident,” he says. “Machine learning algorithms can differentiate between legitimate traffic and malicious activity, enabling faster and more accurate responses. AI can also help to automate responses, reducing the time needed to mitigate an attack and minimising the potential damage.”
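
The simplest form of that pattern detection is a statistical baseline: flag the current traffic level when it sits several standard deviations above recent history. A toy sketch of the idea (a production ML system would learn far richer features than a single rate):

```python
import statistics

def traffic_spike(history, current, k=3.0):
    """Return True if `current` requests-per-second exceeds the recent
    baseline by more than k standard deviations -- a crude stand-in for
    the AI-driven anomaly detection described above."""
    baseline = statistics.fmean(history)
    spread = statistics.pstdev(history)
    return current > baseline + k * spread
```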

The reality is that DDoS attacks are likely to continue for the foreseeable future, as long as unpatched systems remain online and easy-to-deploy DDoS tools exist, says Curran. In the short term, he urges companies to try to deal with DDoS traffic on the edge of their network immediately and make use of tools such as AI which can help with reactive misuse, anomaly detection and network-profiling techniques. But in the longer term, a cultural change is needed too, to cope with the growth in this and other threats. “Inevitably, this means increasing the amount of IT security staff and ensuring all staff are sufficiently trained, even if it’s just basic cyber skills to give the team confidence to identify and respond to these kinds of threats,” he says. “Ensuring that the proper roles and permissions are in place will provide additional accountability.”

Further reading

Keys to effective cybersecurity threat monitoring – A strong cybersecurity threat monitoring strategy that evolves with current and prospective threats is crucial towards long-term company-wide protection

CrowdStrike – what your organisation should do now
https://www.information-age.com/crowdstrike-what-your-organisation-should-do-now-123511555/
Tue, 20 Aug 2024 – By Nick Martindale

CrowdStrike office

Nick Martindale explores what your organisation could do to handle another incident similar to CrowdStrike

The post CrowdStrike – what your organisation should do now appeared first on Information Age.


In July, an update put out by security firm CrowdStrike, which was sent out to around 8.5 million Microsoft Windows devices, crippled IT systems around the world. The impact was immediate, with trains and planes grounded and many organisations – including hospitals, retailers and banks – left unable to function.

The threat of a major outage is significant enough to give any IT professional sleepless nights, but this is compounded by the risk of legal action. “An IT outage that leads to service disruption might constitute a breach of service level agreements, which could result in penalties, refunds or other compensatory measures,” says Shane Maher, managing director of managed services provider Intelliworx.

“In addition, businesses in certain industries – such as professional services, healthcare or other sensitive sectors like payments processing – will have strict security standards that must be adhered to. An IT outage affecting compliance with these standards can again result in major fines and penalties.” Additional risks come from regulations such as GDPR, where organisations must meet strict guidelines around how they respond to a breach.

The reality is that, in the case of technology suppliers such as Microsoft and other large providers, organisations have relatively little control over the implementation of updates, says James Watts, managing director at Databarracks. But that doesn’t mean there’s nothing businesses can do in such situations, or lessons they can learn.

“You need to know how your organisation can continue to operate if an IT service or application fails,” he says. “In the case of systems outside your control, that will often mean manual workarounds to maintain operations.

“With many software-as-a-service products, you can take a backup of your data, which serves your governance, risk and compliance purposes, but you can’t run that application anywhere else. Practically, you are waiting for the supplier to bring the service back online. For cloud services at the infrastructure-as-a-service level, you have the control to build in as much resilience as you are willing to pay for. It’s a balance of cost and risk, so you choose your solution based on uptime requirements and risk appetite.”

Not the same across the board

With other updates, though, organisations have more options. “By architecting a system correctly, you can significantly reduce the risk posed by relying on services that are outsourced, or out of your control,” says Tony Hasek, CEO and co-founder of physical network isolation cyber company Goldilock. “Network segmentation, for example, is a crucial layer of protection that IT teams can implement to ensure updates or changes made by external providers are ‘accepted’ internally before being rolled out.”

An example would be an airport that uses edge devices with a user port and a separate admin port, which is used to update systems and applications. “Any updates through the admin port will be stopped and subject to internal review before being rolled out,” he says. “This prevents forced updates from external service providers.”
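
The gating logic reduces to a simple rule: updates are only ever accepted on the admin path, and even there they are held until internal review signs off. A sketch (the port names, statuses and structure are illustrative, not Goldilock's actual design):

```python
def handle_update(update, port, approved_ids):
    """Gate an incoming update: reject anything not on the admin port,
    and hold admin-port updates until they appear on the approved list."""
    if port != "admin":
        return "rejected"          # the user port never carries updates
    if update["id"] in approved_ids:
        return "applied"
    return "held-for-review"       # forced external updates stop here
```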

Adopting a staggered approach to software and configuration updates is another option. “Updates are sent first to a small pool of devices and the effects observed via telemetry,” explains James Kretchmar, SVP and chief technology officer at Akamai. “Updates then proceed to wider stages of deployment only when it’s clear the effects have been positive. Keeping small problems from becoming big problems is the name of the game.” But a recent survey – ironically by CrowdStrike – suggests that only 54 per cent of organisations review major updates to software applications.
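
The staggered approach Kretchmar describes can be sketched as expanding deployment rings with a halt condition on telemetry (the ring fractions, health check and error budget here are all illustrative):

```python
def staged_rollout(devices, healthy, stages=(0.01, 0.1, 1.0), error_budget=0.01):
    """Deploy to expanding rings of the fleet; stop promoting the update
    as soon as any ring's observed failure rate exceeds the error budget."""
    deployed, start = [], 0
    for fraction in stages:
        end = max(start + 1, int(len(devices) * fraction))
        ring = devices[start:end]
        if not ring:
            continue
        deployed.extend(ring)
        failures = sum(1 for d in ring if not healthy(d))
        if failures / len(ring) > error_budget:
            return deployed, False  # halt: a small problem stays small
        start = end
    return deployed, True
```

With rings of 1 per cent, 10 per cent and then the full fleet, a bad update is caught after touching a handful of devices rather than all of them.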

What can we learn from the CrowdStrike incident?

There are other lessons organisations can learn from the CrowdStrike incident, particularly around how they respond to any kind of outage. “Treat the risk of failure as a ‘when’ not an ‘if’ problem,” advises Dafydd Vaughan, chief technology officer at Public Digital and co-founder of the UK Government Digital Service. “This means thinking about how you can quickly restore or recover services that are affected, and how you can minimise the disruption while that recovery happens.”

This involves both understanding what the most critical IT elements are, and simulating an attack to test the response, says Adam Stringer, a digital resilience consultant at PA Consulting. “In the heat of an outage, you need to understand which services to focus efforts on, including technology, process and suppliers that support those services,” he says. “Simulating an outage – be it cyber-attack, failed change or supply chain failure – will mean you’re better prepared when the unthinkable happens.”
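
Mapping which services matter most is a graph problem: given which services depend on which, the blast radius of any one failure falls out of a simple traversal. A sketch (the service names are hypothetical):

```python
from collections import deque

def blast_radius(dependents, failed):
    """Given a map of service -> services that depend on it, return every
    service affected when `failed` goes down -- a starting point for
    choosing which outage scenarios to simulate first."""
    hit, queue = {failed}, deque([failed])
    while queue:
        for downstream in dependents.get(queue.popleft(), ()):
            if downstream not in hit:
                hit.add(downstream)
                queue.append(downstream)
    return hit
```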

There are other steps organisations can take, to reduce their exposure to particular services. “In this case, having failover systems that leveraged Apple or Linux software could have prevented the outage, or at least significantly reduced the downtime,” says Wes Loeffler, director of third-party risk management at Fusion Risk Management. “In the case of the CrowdStrike Falcon endpoint security software, there are alternatives that are frequently used, such as Cisco Endpoint Security, SentinelOne, Trellix and others.”

Yet while these may provide more control, there are often trade-offs which make them unsuitable for most organisations, says Watts. “Using less popular technologies raises different challenges with skills, support and interoperability,” he contends.

“The cloud market is an oligopoly, which introduces concentration risk challenges because of the interdependency of the supply chain. You may not use AWS directly, but inevitably – whether it’s your suppliers or SaaS providers – someone in your supply chain does. It’s now even more important to look at your supply chain and downstream dependencies.”

Vaughan points out that all major cloud providers have faced major outages in the last year, and argues the bigger picture is one of more frequent and disruptive cyber-attacks. “At the same time, our services are becoming ever more interconnected,” he says.

“This is powerful and brings extraordinary value to businesses, but it can also mean that isolated problems cascade into waves of impact far beyond their initial blast zone. An incident like CrowdStrike, whether deliberate or accidental, will happen again. Businesses now must plan how they will handle it.”

There are lessons here for cloud providers, too, says Aron Brand, CTO of CTERA. “The cloud industry needs to aim for a new benchmark: space-grade reliability,” he asserts. “Space technology, designed to operate in the most unforgiving environments with minimal opportunity for physical intervention, represents the pinnacle of reliability engineering. By aspiring to this standard, cloud providers can push themselves to implement more rigorous testing, redundancy and fail-safe mechanisms that go far beyond current practices.”

Read more

Why the next Ashley Madison is just around the corner – Jason Haworth, Chief Product Officer at Botguard, urges businesses to take steps to avoid falling victim to the next big data breach

Keys to effective cybersecurity threat monitoring – A strong cybersecurity threat monitoring strategy that evolves with current and prospective threats is crucial towards long-term company-wide protection

Why diversity matters when recruiting cybersecurity staff – Putting diversity at the heart of your cybersecurity team helps you spot issues and problems that might not have occurred to you

Why the next Ashley Madison is just around the corner
https://www.information-age.com/why-the-next-ashley-madison-is-just-around-the-corner-123511502/
Thu, 15 Aug 2024 – By Jason Haworth

Data breach

Jason Haworth, Chief Product Officer at Botguard, urges businesses to take steps to avoid falling victim to the next big data breach

The post Why the next Ashley Madison is just around the corner appeared first on Information Age.


Last month, news broke that Ticketmaster had fallen victim to a catastrophic data breach, with personal information from 560 million customers held for ransom.

Only days earlier, the BBC confirmed a breach that left data from 25,000 current and former employees exposed. As the prevalence and sophistication of data breaches grow, so does public awareness of the issue. Phrases like ‘data breach’ might once have been consigned to security teams in backrooms, but have now become household terms – evidenced by the popularity of Netflix’s recent documentary on the infamous Ashley Madison data breach.

Unfortunately, it’s not a matter of ‘if’ another huge data breach will occur – it’s simply a matter of when. Today organisations of all sizes, not just the big players, have a ticking time bomb on their hands with the potential to detonate their brand reputation and destroy customer loyalty.

How can companies get ahead of falling victim to ‘the next big data breach’?

Why do data breaches occur?

We can trace most data breaches back to one of a few initial causes. In most instances, data breaches are carried out by hackers – who can be lone operators or acting as part of an organised ring. These hacks are usually financially motivated, with those responsible stealing credit card numbers, bank accounts and other financial information – or selling stolen personally identifiable information (PII) on the dark web.

The global average cost of a data breach is rising – IBM puts the figure at an estimated $4.45 million – and with it, the incentive for cyber criminals to carry out such attacks. The scale of impact from a single breach can be immense: one of the largest data compromises of the past year, involving MOVEit, a file transfer software tool, had an estimated 72.7 million victims.

Big players like Ticketmaster, the BBC and Ashley Madison are understandably a prime target for hackers seeking financial gain, but attacks of this kind can impact anyone. It ultimately comes down to how much friction cyber criminals will encounter when targeting a particular organisation, their goal being to reap the greatest reward with the least amount of effort.

Lacking dedicated cybersecurity teams, and with finite financial resources to allocate to protective measures, small organisations will often prove easier to infiltrate than the average big player.

The potential reward from a single attack may be smaller, but hackers can combine successful attacks against multiple SMEs to match the financial gain of successfully hacking a large organisation, and with far less effort. SMEs are therefore increasingly likely to fall victim to financially crippling attacks, with 46% of all cyber breaches now impacting businesses with fewer than 1,000 employees.

How are these attacks carried out?

One common attack vector is stolen or compromised credentials – gained via brute force attacks. Another is gaining access to a target network by exploiting weaknesses in websites, operating systems, endpoints, APIs and common software. When hackers locate a vulnerability, they can then plant malware in the network.

For both forms of attack, the rate of success has been significantly accelerated by the use of bots by cybercriminals in recent years. Bots can be used to overload networks at a much faster rate for brute force attacks, and probe websites for weaknesses that can then be exploited at a superhuman rate.
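
Bot-driven credential attacks betray themselves by rate: no human produces dozens of failed logins per minute from one address. A minimal sliding-window counter of the kind a login service might run (the window and threshold values are illustrative):

```python
from collections import defaultdict, deque

class BruteForceMonitor:
    """Track failed logins per source IP in a sliding time window and
    signal when a source crosses the threshold -- the point at which a
    real system would block the IP or demand a CAPTCHA."""

    def __init__(self, window=60.0, threshold=10):
        self.window, self.threshold = window, threshold
        self.failures = defaultdict(deque)  # ip -> timestamps of failures

    def record_failure(self, ip, ts):
        q = self.failures[ip]
        q.append(ts)
        while q and q[0] < ts - self.window:  # drop entries outside the window
            q.popleft()
        return len(q) >= self.threshold
```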

A sign of the rising costs associated with cyber breaches is the increase in cyber insurance premiums from 2023 to 2024. For larger enterprises, having comprehensive cyber insurance is now widely seen as a cost they have to incur in order to do business. For smaller organisations, the ability to absorb the increased cost of cyber insurance will always be more difficult to balance.

How every breach starts

What all breaches and attacks have in common is the initial scanning of possible victims, be it targeted scanning for high profile and high volume companies or just broad scanning across the internet.

The very first step in any attack chain is always the use of tools to gather intelligence about the victim’s systems: version numbers of unpatched software in use, and insecure configuration or programming. Any hacker, whether professional or amateur, uses scanning bots or relies on websites like Shodan.io to generate an attack list of victims running vulnerable software. Anything you operate with internet connectivity is highly likely to have been scanned at least once within the last 24 hours.

Getting ahead of the breach

All organisations, from SMEs to multi-billion pound companies like Ashley Madison and Ticketmaster, must ensure they’re not an easy target for hackers. As the attack on Ashley Madison demonstrated, the ramifications of a successful attack often go far beyond financial consequences if users of your website have entrusted you with their data. It’s the organisation’s responsibility to deliver on the promise to adequately protect its users’ data.

The fewer resources SMEs have at their disposal to build resilient web infrastructure, the greater their chances of becoming a target. But that doesn’t mean that a strong resistance – enough to deter hackers – can’t be created.

Resilient web infrastructure can be built in a number of different ways. Constructing the right toolkit is a good starting point. This includes using data security tools to apply encryption, putting incident response plans in place, improving employee training, and adopting more rigorous approaches to web traffic management to keep malicious traffic off your website before it can ever strike.
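
That traffic-management layer can start as crude allow/deny checks before a request ever reaches the application. A sketch using only the standard library (the blocked network and scanner names are illustrative; real deployments lean on a WAF or a reputation feed):

```python
import ipaddress

def allow_request(request, blocked_networks, scanner_agents):
    """Return False for requests from known-bad networks or carrying the
    user-agent strings of common scanning tools."""
    ip = ipaddress.ip_address(request["ip"])
    if any(ip in net for net in blocked_networks):
        return False
    agent = request.get("user_agent", "").lower()
    if any(marker in agent for marker in scanner_agents):
        return False
    return True
```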

Finally, it’s important to remember that it ultimately comes down to strategy, not resources. Even the big players, like Ashley Madison, which had all the resources in the world to prevent a hack, still fell down. The fatal flaw will always be pretending the risk doesn’t exist. The ‘next Ashley Madison’ may be right around the corner, but by taking the time to identify specific vulnerabilities and devise a strategy to safeguard them against hackers, it’s far less likely to be your organisation hitting the headlines next.

Only 1 in 8 workers are currently equipped to work in greentech
https://www.information-age.com/only-1-in-8-workers-are-currently-equipped-to-work-in-greentech-123511480/
Wed, 14 Aug 2024 – By Aoibhinn McBride

Greentech concept

Jobbio's Aoibhinn McBride explains the importance of having skillsets for greentech and how you can upskill accordingly

The post Only 1 in 8 workers are currently equipped to work in greentech appeared first on Information Age.


While proficiency in generative AI has grabbed the most headlines ever since ChatGPT burst onto the scene in November 2022, greentech is increasingly being recognised as one of the most vital sectors for the future.

This is undoubtedly thanks in part to AI itself: training a single LLM in 2019 is estimated to have generated around 300,000kg of CO2 emissions, the equivalent of 125 round-trip flights between New York and Beijing.
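
The comparison implies a per-flight figure worth sanity-checking:

```python
llm_training_kg = 300_000   # estimated CO2 from training one 2019-era LLM
round_trips = 125           # New York <-> Beijing round trips cited above

per_flight_kg = llm_training_kg / round_trips
print(per_flight_kg)  # 2400.0 -- about 2.4 tonnes of CO2 per round trip
```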

When you put this into perspective alongside a recent UN report which highlights that the next decade is critical in combating the destructive effects of climate change, it’s easy to understand why “green talent” – workers equipped with environmentally focused skill sets – is more necessary than ever and will play a key role in this effort.


Key sectors in the green talent movement

So, what is green talent exactly and can anyone with an interest in environmental matters pivot to the sector straight away? Not quite.

To be considered green talent, workers must demonstrate skills that align with environmental principles and attitudes and be capable of actively promoting a sustainability-focused approach when developing business strategies or implementing new technological tools and products.

Three sectors in particular have been earmarked as essential in the fight against climate change: energy production, transportation, and finance.

While energy production is an obvious priority, its scope goes far beyond offsetting AI’s footprint, with electric vehicles (EVs) also in sharp focus thanks to the Sunak government’s hotly debated zero emissions vehicle mandate, which was launched in January 2024.

Meanwhile, the finance sector is also undergoing a green transformation, with many fintech companies leading the charge.

Examples include Stripe Climate, a service that finances tech initiatives dedicated to reducing carbon emissions through a carbon removal purchase tool, and TreeCard, a UK-based fintech that plants trees and offers customer rewards for every purchase made using its wooden debit card.

As a result, skills in carbon accounting, carbon credits, emissions trading, and sustainability reporting are becoming some of the fastest-growing green skills for finance professionals in the UK.
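
Carbon accounting, the first of those skills, reduces at its simplest to activity data multiplied by emission factors. A sketch of the calculation (the factor values are placeholders, not official conversion figures):

```python
def footprint_kg(activities, factors):
    """Simple activity-based carbon accounting: sum each activity amount
    multiplied by its emission factor (kg CO2e per unit of activity)."""
    return sum(amount * factors[name] for name, amount in activities.items())
```

Real reporting replaces the placeholder factors with published conversion datasets and adds scope 1/2/3 categorisation, but the arithmetic at the core is this.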

Job titles to watch out for include chief sustainability officers, sustainable supply chain managers, and eco-designers.

Preparing for the future

Data from the World Economic Forum reveals that workers with green skills are nearly 30 per cent more likely to be hired than those without.

Despite the clear opportunities, the statistics surrounding green talent are troubling. Recent data indicates that only one in eight workers is equipped to work in greentech.

Additionally, the demand for green skills is outpacing supply. Just 12.3 per cent of the workforce possesses green skills, while 22.4 per cent of job postings require at least one green skill. Consequently, those with green skills are defying the current tech downturn, as greentech jobs grew by 15.2 per cent between February 2022 and February 2023.

If you’re interested in greentech but worry you lack the necessary eco credentials, there are several ways to effectively upskill.


The Institute of Sustainability Studies, located near Dublin’s docklands tech hub, offers self-paced online courses in business sustainability, corporate sustainability reporting, and sustainability plan development.

For a more in-depth learning opportunity, the University of Cambridge runs an eight-week online course covering topics such as developing a sustainable mindset, sustainability leadership, and maintaining competitiveness by integrating sustainability strategies.

And for those seeking a shorter option, London Business Training & Consulting offers a six-hour Green Strategy and Sustainability course, which is classroom-based and takes place over one day.

Whether you’re looking for a role in greentech or the wider tech industry, head to the Information Age Job Board today

Read more

4 smart cities using tech to improve sustainability – As local leaders from urban areas worldwide race to support growing populations and tackle climate change, connected data will be key to their success, says Lisa Wee, global head of sustainability of AVEVA

Trends in data centre sustainability – The power-hungry data centre industry has pledged to go net-zero by 2030. We talk to data centre designers and operators about the latest trends in data centre sustainability

How organisations can use AI to drive sustainability efforts – Peter Weckesser, chief digital officer at Schneider Electric, discusses how AI can help businesses drive sustainability efforts

The evolution of the CTO – from tech keeper to strategic leader
https://www.information-age.com/the-evolution-of-the-cto-from-tech-keeper-to-strategic-leader-123511451/
Mon, 12 Aug 2024 – By Rohan Patel

The strategic role of a CTO

Rohan Patel of builder.ai explains how the role of the Chief Technology Officer (CTO) is changing and what you can do to succeed

The post The evolution of the CTO – from tech keeper to strategic leader appeared first on Information Age.


The tech industry is a whirlwind of constant change. Over the last decade, the sector has evolved at an unprecedented pace, often leaving both consumers and businesses struggling to keep up. And at the forefront of this digital revolution stands the Chief Technology Officer (CTO).

The role of the CTO has changed significantly over the years, and it requires a lot more than keeping up with the latest tech trends. Gone are the days when the CTO was only looking after the tech stack. Today, CTOs work hand in hand with CEOs and they’re expected to be strategic leaders who can help their organisations navigate the complex world of technology and drive innovation.

From backroom to boardroom

CTOs have experienced a huge shift in how they are positioned in the workplace. They are no longer part of a small-to-medium-sized team that operates separately from the rest of the business; they are the key to tangible business growth and perhaps one of the most crucial parts of a leadership team.

The main duty of CTOs is to maintain – and where possible, to modernise – tech, and to decide when something has kicked the bucket and no longer has a purpose. These things require people power, specialist skills and money. Needless to say, investment in the role is vital.

Tech leaders often feel burnt out, or worried that they don’t have the resources and support needed to do their job well. In fact, for many tech executives, the global shortage of software developers has impacted their productivity and ability to implement key technology initiatives, like business digitalisation.

The talent crunch

Despite the shortage in software developers, CTOs should not be hired as a cover-all solution, and companies should always seek to hire a separate Lead Developer role.

For example, the CTO is responsible for the entire technical team across the business, while the Lead Developer looks after a smaller team of developers. The CTO also requires an in-depth understanding of what companies need in order to undergo digital transformation and keep up to date with tech processes, whereas the Lead Developer needs a practical understanding of development and can align teams to produce a minimum viable product.

Both of these roles are highly sought after in the tech industry, and although CTOs often have some background in development and programming, they exist as two separate roles for good reason, and they should be hired as such.

Managing scope creep

Scope creep has a bad rep, and that’s because a lot of traditional development solutions are not set up for changes to be made beyond the agreed scope. In an ideal world, the security and predictability provided by a well-defined project scope would be invaluable; however, our reality is far from perfect.

The saying goes, “You can never set foot in the same river twice,” and the same is true for leaders in tech – everything evolves from the moment you start working on a project. There is much to appreciate about technology that remains stable yet adaptable when changes are necessary during development. Today, innovative CTOs are on the lookout for software solutions that come with the flexibility to make important U-turns if ever needed.

The customer is in the driver’s seat

Today, for innovative CTOs, it’s all about being in touch with the customer journey and keeping up with customer needs as they continue to change. You may have heard how digital transformation is driving a new customer experience, but has it crossed your mind that it might be the other way round?

Although it might seem like a ‘what came first, the chicken or the egg?’ scenario, the numbers suggest that the digital market is being led by consumer behaviour.

Optimising the balance sheet

Technological transitions are unpredictable, but with the right outlook and determination, CTOs can identify investments that will promote development and success, allowing them to confidently navigate the unknown.

Nowadays, CTOs are looking for the most effective initiatives to keep their teams productive and engaged for years to come. And during the process, they will gain essential expertise in cost management while trailblazing new technology.

In fact, CTOs can regulate expenditures in various ways. For example, they can move to third-party software support and consider managed services. They should also be aware of the software that teams use within the company and employ sustainable cloud management strategies. All of this will aid CTOs in becoming the guiding light to the business’ digital journey.

Building the foundation for success

In the midst of the current AI revolution, CTOs must ensure that their data pipelines are streamlined.

In fact, data pipelines require adequate training and preparation to allow AI-assisted tools to work best. This involves standardising service management methods and developing a repository of structured content that can be trusted as a data source.

Consider data architecture to be the blueprint for your AI command centre. It’s critical to examine the organisation’s readiness and determine the business use cases best suited for AI. This includes assessing the team’s abilities, the tools at their disposal, and the operational processes required to produce accurate data and efficiently design and deploy models.

Your data pipelines will only be ready to train AI models that can unleash new business insights and possibilities if you have the proper foundation and infrastructure in place.

Moreover, enterprise chatbots can provide staff with real-time access to data and optimise procedures to increase data quality. These chatbots are adaptable and scalable, accommodating new data sources and growing business requirements.

Moving forward, CTOs will continue to face challenges such as managing scope creep, optimising spending, and driving customer-led digital changes. However, their ability to implement AI effectively will be the key differentiator. Those who succeed will set themselves apart, leading their organisations to new heights in the digital age.

Rohan Patel is the SVP of engineering at Builder.ai.

Read more

CTO salary – how much can you earn where? – How much you can earn as a CTO depends on the size of company you work for and where. But Information Age has gleaned what the average CTO salary is around the world

What is the role of a CTO in a start-up? – The role of the chief technology officer (CTO) can be vital to the much needed innovation of any start-up. Here, we explore what this entails

What’s the role of the CTO in digital transformation? – The role of the CTO is often all-encompassing when it comes to digital transformation. Simon Wakeman explains how they can achieve success

The post The evolution of the CTO – from tech keeper to strategic leader appeared first on Information Age.

]]>
Is HIPAA enough to protect patient privacy in the digital era? https://www.information-age.com/hipaa-patient-privacy-123511410/ Mon, 05 Aug 2024 14:54:52 +0000 https://www.information-age.com/?p=123511410 By Sadie Williamson on Information Age - Insight and Analysis for the CTO

It’s well known that HIPAA was developed and enacted in order to protect patient privacy, but is it enough in 2024?

The post Is HIPAA enough to protect patient privacy in the digital era? appeared first on Information Age.

]]>
By Sadie Williamson on Information Age - Insight and Analysis for the CTO

Today’s reality of health apps, fitness trackers, health monitoring devices and cyber attacks was unimaginable when HIPAA was enacted in 1996. Despite repeated updates, it’s not clear whether the protections that HIPAA offers are robust enough to effectively safeguard patients from the many threats they face.

HIPAA loopholes allow digital data abuse

Back in December 2022, the Office for Civil Rights (OCR), which enforces HIPAA, raised the alarm about pixel trackers. Many healthcare organizations, including leading hospitals, use ads and website analytics solutions that include pixel trackers from companies like Google and Meta. The OCR warned that these pixels could violate HIPAA if they expose patients’ protected health information (PHI).

These pixels might only be embedded in public-facing content pages, but they still collect identifying information about the viewer, such as their geographic location and/or IP address. The OCR pointed out that if the visitor goes on to view a page about AIDS medications, cancer treatments, or psychiatric care, for example, the pixel has now collected identifying information that might be related to their health issues.

Some healthcare providers bristled at this warning and sued the OCR for overreach. Their case was upheld by a federal judge in Texas earlier this summer, which means that HIPAA is now ineffective in protecting patients from pixel trackers on websites. From an enforcement perspective, it also doesn’t help that the HHS hasn’t audited any covered entity for HIPAA compliance since 2017, due to budget shortages.

The same concern applies to email newsletters from healthcare providers. These messages usually include the recipient’s name, always their email address, and sometimes their geographic location or healthcare areas of interest too. “Think about this, there’s a patient or a non-patient who signs up and is receiving your general email newsletter, and they click on a link, that click is tracked, and that click is tied back to an individual who signed up and if they clicked a link around receiving a mammogram, well, you’ve now got PHI,” points out Paubox’s Dean Levitt.

HIPAA doesn’t apply to law enforcement data acquisition

It’s important to note that even if HIPAA was applied perfectly to every relevant entity, it doesn’t permit healthcare providers to withhold information from law enforcement requests. In today’s highly politicized climate, this could result in significant harm to patients.

For example, since Roe v. Wade was overturned, restrictive anti-abortion laws in many states mean that patients who get an abortion, and professionals who provide one, could face criminal proceedings. Even women who have a miscarriage could be accused of getting an illegal abortion.

Patient data that reveals the timing of menstrual periods, medication that was prescribed, and other symptoms could all be crucial in these legal proceedings. The Final Rule, which becomes law towards the end of 2024, was passed to rectify this, but it only relates to women who travel to get an abortion in a state where it’s legal. It doesn’t protect a woman who has a miscarriage from having her private health data used in court against her.

Poor cybersecurity is a patient care issue

HIPAA requires covered entities to establish strong data privacy policies, but it doesn’t regulate cybersecurity standards. HIPAA was deliberately designed to be tech agnostic, on the basis that this would keep it relevant despite frequent technology changes. But this could be a glaring omission.

For example, Change Healthcare, a medical insurance claims clearinghouse, experienced a data breach when a hacker used stolen credentials to enter the network. If Change had implemented multi-factor authentication (MFA), a basic cybersecurity measure, the breach might not have taken place. But MFA isn’t specified in the HIPAA Security Rule, which was passed 20 years ago.
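MFA comes in several forms; one of the most common is the time-based one-time password (TOTP) generated by authenticator apps. The sketch below is a minimal pure-Python implementation of the HOTP/TOTP algorithms from RFC 4226 and RFC 6238 – purely illustrative of how a basic second factor works, not a claim about what Change Healthcare should have deployed:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    # Time-based variant (RFC 6238): the counter is unix-time // step
    t = int(time.time() if at is None else at)
    return hotp(secret, t // step)
```

During login, the server computes the same code from its stored copy of the secret and compares it with what the user typed; a stolen password alone is then not enough to get in.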

Cybersecurity in the healthcare industry falls through the cracks of other regulations. The CISA update in early 2024 requires companies in critical infrastructure industries to report cyber incidents within 72 hours of discovery. However, this doesn’t cover insurance companies, health IT providers, labs or diagnostics facilities.

“Crucially, there are many third-parties in the healthcare ecosystem that our members contract with who would not be considered ‘covered entities’ under this proposal, and therefore, would not be obligated to share or disclose that there had been a substantial cyber incident – or any cyber incident at all,” warns Russell Branzell, president and CEO of CHIME.

Mobile apps are a weak link

HIPAA’s Privacy Rules don’t apply to most health apps, because they aren’t considered covered entities. There are many apps that collect sensitive health data, including nutrition apps, period tracking apps, fitness apps, sleep apps, and mental health apps. Many of them come installed on smartphones by default, and far too many fail at data privacy and security.

In 2022, Mozilla investigated 27 mental health apps. All but two of them failed to meet even the most basic data privacy and security requirements. For example, they had no clear policies on what they do with your data, who they share it with, how long they store it, or how they protect it from hacking attempts. Few apps promised to delete data upon request. When Mozilla revisited the issue in 2023, just six apps showed significant improvement.

The situation is highlighted by high-profile data privacy abuses. For example, the FTC fined mental health app BetterHelp $7.8 million for sharing sensitive user data with advertisers, despite promising not to do so. Talkspace, a platform for people to communicate with licensed therapists, was found to have routinely reviewed and mined user conversations for business insights and data for training AI bots.

Many apps justify sharing data by claiming that it’s “anonymized.” Even if data is anonymized as promised, it can still be connected to you when it’s combined with other information. The more data points in a dataset, the easier it is to de-anonymize PHI.
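The linkage attack behind de-anonymisation is simple to demonstrate. The toy sketch below (fabricated data, hypothetical field names) joins "anonymized" health records to a public auxiliary dataset on three quasi-identifiers – the classic approach used to re-identify supposedly anonymous medical records:

```python
# "Anonymized" health records: names removed, quasi-identifiers kept.
health_records = [
    {"zip": "02138", "birth": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth": "1962-03-02", "sex": "M", "diagnosis": "diabetes"},
]

# Public auxiliary data (e.g. a voter roll) carrying the same quasi-identifiers.
voter_roll = [
    {"name": "J. Doe", "zip": "02138", "birth": "1945-07-31", "sex": "F"},
    {"name": "A. Smith", "zip": "02141", "birth": "1980-01-15", "sex": "M"},
]

def reidentify(records, aux, keys=("zip", "birth", "sex")):
    """Link records back to identities where the quasi-identifier tuple matches."""
    index = {tuple(p[k] for k in keys): p["name"] for p in aux}
    return [
        {**r, "name": index[tuple(r[k] for k in keys)]}
        for r in records
        if tuple(r[k] for k in keys) in index
    ]

matches = reidentify(health_records, voter_roll)
```

With only a handful of shared fields, the first "anonymous" record is tied to a name – which is why adding more data points to a dataset makes re-identification easier, not harder.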

“Although these apps undoubtedly make mental health services more convenient, they also generate massive amounts of sensitive data, and therefore have raised serious concerns over patient privacy,“ writes Darrell M. West, senior fellow at the Center for Technology Innovation.

The harm of poor data privacy regulation

Weak HIPAA provisions could result in serious patient harm. Many healthcare companies conduct wide-ranging intake questionnaires, covering gender identity, sexual orientation, and mental health history. At its most benign, this could be monetized to target ads. More worryingly, mental health data could be used by employers to vet new hires, or even by extremist groups or abusers looking for vulnerable people to target.

With HIPAA falling short in many ways, it’s vital for service providers of all types to keep an eye on where legislation is headed – and how they can future-proof their businesses by prioritizing patient data privacy regardless of legislation.

The post Is HIPAA enough to protect patient privacy in the digital era? appeared first on Information Age.

]]>
Fully Homomorphic Encryption (FHE) with silicon photonics – the future of secure computing https://www.information-age.com/fully-homomorphic-encryption-fhe-with-silicon-photonics-the-future-of-secure-computing-123511276/ Tue, 23 Jul 2024 15:51:21 +0000 https://www.information-age.com/?p=123511276 By Nick New on Information Age - Insight and Analysis for the CTO

FHE concept

Nick New, CEO and founder of Optalysys, walks us through the opportunities and challenges in implementing Fully Homomorphic Encryption (FHE)

The post Fully Homomorphic Encryption (FHE) with silicon photonics – the future of secure computing appeared first on Information Age.

]]>
By Nick New on Information Age - Insight and Analysis for the CTO

FHE concept

As data breaches and cyberattacks become increasingly sophisticated, traditional encryption methods face unprecedented threats.

The rise of quantum computing also poses a significant risk to current encryption methods, which could be rendered obsolete by the computational power of quantum machines. Additionally, the exponential growth in machine learning and artificial intelligence heightens the need for secure computing, as these technologies rely heavily on vast, high-quality datasets.

If compromised data is fed into AI models, the resulting outputs will also be compromised. Ensuring the quality, integrity and accuracy of data, in addition to its volume, is therefore critical. Fully Homomorphic Encryption (FHE) offers a way forward, poised to transform how we handle and share sensitive data.

The data dilemma: protecting vs utilising 

Data is often referred to as the most valuable global asset. However, its true value is only realised when used to make informed decisions – be it improving operational efficiency, developing products or understanding societal trends. Organisations are increasingly seeking ways to optimise this value through new technologies such as AI, ML and data collaboration. However, valuable data often remains siloed within organisations and the most valuable data is usually the most sensitive.

Data breaches by criminal organisations can also have devastating consequences, not only for the organisation, but for the individuals whose personal data has been stolen. This data must be kept confidential and shared only with trusted parties. However, the need for collaboration introduces tension between the benefits of data sharing and the risks to confidentiality.

Encryption is typically applied to sensitive data only when it is being moved or stored. To process data, it generally needs to be decrypted first, exposing it to risk. This presents a dilemma: protect data and limit its use, or utilise the data and increase exposure to breaches.

FHE resolves this tension by enabling encrypted data to be computationally processed. Data can be shared without ever being exposed or vulnerable, making it useless to attackers even if intercepted. FHE is ushering in a new era of secure computing and supporting the new data economy by allowing multiple parties to work on the data without ever actually accessing it.
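As a concrete illustration of computing on ciphertexts, the toy sketch below implements the well-known Paillier scheme, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. (FHE goes further, supporting arbitrary computation; this sketch uses insecurely small primes and is for illustration only, not production use.)

```python
import math
import secrets

# Toy Paillier keypair with small fixed primes -- illustration only;
# real deployments use primes of 1024+ bits and vetted libraries.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
# mu = L(g^lam mod n^2)^-1 mod n, where L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    while True:                                  # pick random r coprime to n
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
a, b = 17, 25
assert decrypt((encrypt(a) * encrypt(b)) % n2) == a + b
```

A third party holding only the ciphertexts and the public key can compute the encrypted sum without ever seeing 17 or 25 – the essence of the "work on data without accessing it" property described above.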

Challenges of FHE and the potential of silicon photonics

Despite its immense potential, FHE has faced significant adoption challenges, primarily due to its substantial computing power requirements and the inefficiencies of traditional electronic processing systems. FHE requires specialist hardware and considerable amounts of processing power, leading to high energy consumption and increased costs. However, FHE enabled by silicon photonics — using light to transmit data — offers a solution that could make FHE more scalable and efficient.

Current electronic hardware systems are reaching their limits, struggling to handle the large volumes of data and meet the demands of FHE. However, silicon photonics can significantly enhance data processing speed and efficiency, reduce energy consumption and enable large-scale implementation of FHE. This can unlock numerous possibilities for data privacy across various sectors, including healthcare, finance and government, in areas such as AI, data collaboration and blockchain. This could potentially lead to significant progress in medical research and fraud detection, and enable large-scale collaboration across industries and geographies.

The path to widespread adoption

The Covid-19 pandemic highlighted the real-world outcomes when organisations collaborate effectively for a shared goal. Vaccine development, typically a lengthy process, was accelerated through big pharma companies working together. For example, the partnership between BioNTech, Fosun Pharma, and Pfizer led to the rapid development of the widely distributed Pfizer-BioNTech vaccine. This involved sharing large amounts of unique and valuable information, including biomedical data and trial results – often without formal agreements in the early stages. However, this also highlighted the risk of compromising sensitive information and the need for better tools to ensure data security and confidentiality.

Privacy Enhancing Technologies (PETs) have traditionally been complex and challenging to deploy. However, FHE stands out by its ability to maintain full cryptographic security, which ensures data remains protected against unauthorised access during processing. This allows data scientists and developers to run data analysis tools on sensitive information without ever seeing or compromising sensitive data. While implementing FHE presents challenges for users without cryptographic skills, modern FHE software tools are making it increasingly accessible without requiring deep cryptographic knowledge. Additionally, regulatory environments are evolving to support widespread FHE adoption. Guidance from bodies like the Information Commissioner’s Office (ICO) and regulatory sandboxes in regions like Singapore are supporting the development of FHE. Its applications are vast, spanning government-level data protection, cross-border financial crime prevention, defence intelligence exchange, healthcare collaboration, and AI integration.

In healthcare, for example, FHE can enable secure analysis of patient data, supporting advanced research while ensuring patient data remains confidential. Financial institutions can perform secure computations on encrypted data for risk assessments, fraud detection, and personalised financial services. Government and defence companies can also enhance national security with secure communication and data processing in untrusted environments. Additionally, FHE allows for the secure training of machine learning models on encrypted data, combining AI’s power with data privacy.

The future of data security with FHE enabled by silicon photonics

FHE is set to transform the future of secure computing and data security. By enabling computations on encrypted data, FHE offers new levels of protection for sensitive information, addressing critical challenges in privacy, cloud security, regulatory compliance, and data sharing. While technical challenges remain, advancements in FHE technology are paving the way for its widespread adoption.

As we continue to generate and rely on large amounts of sensitive data to solve some of society’s biggest challenges, FHE enabled by silicon photonics provides a secure and efficient solution that ensures data can be used and remain confidential. The future of secure computing is one where organisations can do more with their data, either through secure sharing or processing — unlocking its full potential without compromising privacy.

Nick New is the CEO and founder of Optalysys.

Read more

Why data isn’t the answer to everything – Splunk’s James Hodge explains the problem with using data (and AI) in helping you make key business decisions

Data encryption: what can enterprises learn from consumer tech? – Siamak Nazari, CEO of Nebulon, discusses the data encryption lessons that enterprises can learn from consumer tech

The post Fully Homomorphic Encryption (FHE) with silicon photonics – the future of secure computing appeared first on Information Age.

]]>
What is AI-SPM (AI Security Posture Management)? https://www.information-age.com/what-is-ai-spm-ai-security-posture-management-123511079/ Mon, 15 Jul 2024 16:02:20 +0000 https://www.information-age.com/?p=123511079 By Partner Content on Information Age - Insight and Analysis for the CTO

AI Security Posture Management

Find out about the ways that AI Security Posture Management (AI-SPM) can safeguard artificial intelligence within your organisation

The post What is AI-SPM (AI Security Posture Management)? appeared first on Information Age.

]]>
By Partner Content on Information Age - Insight and Analysis for the CTO

AI Security Posture Management

AI security posture management encapsulates a holistic strategy to safeguard the security and reliability of artificial intelligence and machine learning systems.

This multifaceted approach encompasses ongoing surveillance, evaluation, and enhancement of the security stance concerning AI models, data, and infrastructure. Within AI-SPM lies the critical tasks of pinpointing and rectifying vulnerabilities, misconfigurations, and plausible threats linked to AI utilisation, alongside guaranteeing adherence to pertinent data privacy and security mandates.

AI-SPM explained

Within cybersecurity environments where artificial intelligence (AI) holds significant importance, AI security posture management (AI-SPM) emerges as a crucial element. The presence of AI systems, including machine learning models, large language models (LLMs), and automated decision systems, introduces distinct vulnerabilities and potential attack vectors. AI-SPM tackles these challenges by offering tools for monitoring, evaluating, and mitigating the risks linked to AI elements within technological frameworks.

Data governance

Legislation oriented towards AI enforces stringent regulations concerning AI and customer data utilisation within AI applications, demanding enhanced governance capacities beyond the norm in most organisations. AI security posture management (AI-SPM) scrutinises the data origins utilised for training and establishing AI models to pinpoint and categorise sensitive or regulated data, including customers’ personally identifiable information (PII), that could potentially be disclosed through the results, records, or engagements of compromised models.

Runtime detection and monitoring

AI-SPM consistently monitors user interactions, cues, and inputs to AI models (such as large language models) to uncover misuse, excessive prompts, unauthorised access attempts, or unusual activities related to the models. It reviews the outcomes and records of AI models to pinpoint possible cases of sensitive data exposure.
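In practice, such runtime monitoring can start with screening each interaction against known misuse patterns and scanning model outputs for sensitive data. The sketch below is a deliberately simplified illustration – the patterns, thresholds and function names are invented for the example; real AI-SPM products use far richer detection:

```python
import re
from collections import defaultdict

# Illustrative patterns only; production tools use far richer detection.
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal your system prompt",
]
PII_PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
}

request_counts = defaultdict(int)  # prompts seen per user

def inspect_prompt(user: str, prompt: str, rate_limit: int = 100) -> list:
    """Return alerts raised by a single model interaction."""
    alerts = []
    request_counts[user] += 1
    if request_counts[user] > rate_limit:
        alerts.append(f"rate-limit: {user} exceeded {rate_limit} prompts")
    for pat in INJECTION_PATTERNS:
        if re.search(pat, prompt, re.IGNORECASE):
            alerts.append(f"possible prompt misuse: matched {pat!r}")
    return alerts

def inspect_output(text: str) -> list:
    """Flag model outputs that appear to contain sensitive data."""
    return [f"possible {kind} exposure" for kind, pat in PII_PATTERNS.items()
            if re.search(pat, text)]
```

Hooked in front of and behind the model, checks like these give security teams a log of suspicious prompts and potential data exposures to triage.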

Risk management

AI-SPM empowers organisations to detect weaknesses and misconfigurations within the AI supply chain that could result in data breaches or unauthorised access to AI models and resources. This advanced technology meticulously outlines the entire AI supply chain, encompassing source data, reference data, libraries, APIs, and pipelines driving each model. Subsequently, it conducts an in-depth analysis of this supply chain to pinpoint any incorrect encryption, logging, authentication, or authorisation configurations.

Compliance and governance

As regulations on AI utilisation and customer data, such as GDPR and NIST’s Artificial Intelligence Risk Management Framework, continue to expand, AI-SPM plays a crucial role in helping organisations enforce policies, maintain audit trails (tracking model lineage, approvals, and risk acceptance criteria), and attain compliance by linking human and machine identities with access to sensitive data or AI models.

Discovery and visibility

The absence of an AI inventory can result in shadow AI models, non-compliance issues, and data breaches facilitated by AI applications. AI-SPM enables organisations to identify and manage a repository of all AI models utilised within their cloud setups, including the relevant cloud resources, data origins, and data pathways utilised in training, optimising, or deploying these models.

Risk response and mitigation

When urgent security events or policy breaches are identified within data or the AI infrastructure, AI-SPM supports quick response processes. It grants insight into the situation and key stakeholders involved in addressing and resolving the identified risks or misconfigurations promptly.

Endnote

Incorporating AI-SPM as a foundational element within the MLSecOps framework marks a pivotal move towards ensuring AI technologies’ secure, compliant, and ethical advancement. By embracing AI-SPM methodologies with the backing of the Protect AI platform, organisations can confidently manage the intricacies associated with AI and ML technologies.

Read more

3 ways AI is set to transform the energy sector – AI will play a part in improving the customer experience and reducing carbon emissions in 2024, says Zoa CTO Crystal Hirschorn

The post What is AI-SPM (AI Security Posture Management)? appeared first on Information Age.

]]>
Does gender matter when it comes to staying in or leaving a job? https://www.information-age.com/does-gender-matter-when-it-comes-to-staying-in-or-leaving-a-job-123510924/ Wed, 10 Jul 2024 09:18:04 +0000 https://www.information-age.com/?p=123510924 By Amanda Kavanagh on Information Age - Insight and Analysis for the CTO

Cross-industry research reveals women and men share similar priorities when deciding whether to stay in or leave a job

The post Does gender matter when it comes to staying in or leaving a job? appeared first on Information Age.

]]>
By Amanda Kavanagh on Information Age - Insight and Analysis for the CTO

A new report sheds light on the differing priorities between men and women when it comes to job satisfaction and retention.

The Why Women Leave report, produced by Encompass Equality, offers a comprehensive look at the factors influencing women’s decisions to stay with or leave their employers.

With cross-industry research conducted in 2023 with 4,000 women, and updated in May 2024 to include data from 1,400 men, the findings challenge some traditional assumptions about gender-specific workplace issues.

Back to basics

Contrary to what many might expect, the report reveals that workplace basics, rather than stereotypical ‘women’s issues’, play a more significant role in women’s career decisions.

Among the 15 main factors explored in this research, support from line managers, the day-to-day work itself, and team dynamics emerged as the most prominent influences on women’s choices to stay or leave an organisation.

Interestingly, these factors outweigh considerations such as caring responsibilities and menopause, which are often perceived as primary concerns for women in the workplace.

As the report says, “If you have women who are not feeling motivated by the day-to-day work they are doing, have a line manager they can’t communicate with, or a lack of flexibility around how they do their job, then having a menopause offering is not going to stop them from leaving.”

Flex appeal

While many factors influencing job satisfaction are shared across genders, flexibility does emerge as a significant point of difference between men and women.

In fact, 76% of women say the ability to work flexibly from different locations has a “huge” or “significant” impact on their decisions about whether to stay with or leave their employer.

Meanwhile, 65% of women say the ability to work flexibly from a time perspective has a “huge” or “significant” impact on their decisions.

Childcare, the availability and/or extent of special leave, and amount of work all matter more to women than men.

The report attributes workload’s high ranking as a critical factor in retaining female talent to the lead role women continue to play in responsibilities at home and in chairing voluntary committees.

This factor flies under the radar as women possibly fear being seen as “less committed, hard-working, loyal or even professional” than those who can work overtime. If organisations are serious about understanding what holds women back from progressing into senior roles, this is one of the main issues that needs tackling.

A nuanced picture of flexibility in practice is revealed in the report.

While many companies appear more willing to accommodate location-based flexibility, many employees report encountering resistance when seeking time-based flexibility options, such as compressed hours, nine-day fortnights, or job-sharing arrangements.

It is this time-based flexibility where many organisations have room for improvement, and company-wide policies may not cut the mustard. Personalised flexibility is where it’s at, as one size truly does not fit all.

Shifting priorities

The report also provides insights into how women’s priorities shift throughout their careers.

What matters most for women in their 20s are their prospects for career progression and their salary and benefits. Flexibility, in terms of time and location, rank below the work itself, team, line manager, amount of work and culture.

For women in their 30s, line manager support remains crucial, but flexibility in work location takes the top spot, especially for those with children. Salary and benefits, the work itself, and team dynamics also play significant roles.

As women enter their 40s, line management continues to be the single biggest factor influencing decisions to stay or leave. Workplace culture regains prominence, and flexibility in all forms is viewed positively.

For women in their 50s, while menopause becomes a significant consideration, it still ranks below factors such as culture, line management, amount of work, and salary and benefits. Flexibility in both location and time takes precedence over menopause and eldercare concerns.

Once women reach their 60s, flexibility becomes less critical, with culture, work content, team dynamics, and line management taking priority.

Overall, deciding whether to stay or leave a job is not hugely seen to be a gender issue. Most of the things identified as priorities for women – culture, prospects for career progression, support from line managers and the day-to-day work itself – are also priorities for men.

This doesn’t mean that organisations should do nothing. To close gender gaps, organisations need to actively improve things that do have a bigger impact on women than men.

The common thread is time flexibility, and associated, amount of work. For forward-thinking leaders who are genuinely committed to the cause of gender parity, this is where the greatest opportunity lies.

If this doesn’t sound like the direction your organisation is heading in, it could be time to look for something new.

The post Does gender matter when it comes to staying in or leaving a job? appeared first on Information Age.

]]>
DFIR and its role in modern cybersecurity  https://www.information-age.com/dfir-and-its-role-in-modern-cybersecurity-123510868/ Wed, 26 Jun 2024 10:30:07 +0000 https://www.information-age.com/?p=123510868 By Partner Content on Information Age - Insight and Analysis for the CTO

DFIR

Here is why digital forensics and incident response (DFIR) is a crucial part of today's cybersecurity ecosystem

The post DFIR and its role in modern cybersecurity  appeared first on Information Age.

]]>

As the world continues to move to the cloud, cybersecurity is becoming increasingly important to protect sensitive data and ensure the integrity and availability of systems.

Modern cybersecurity encompasses a range of practices, technologies and policies designed to safeguard networks, devices and data from cyber threats.

These dangers are constantly evolving and becoming more sophisticated, ranging from malware and ransomware to advanced persistent threats (APTs) and zero-day exploits. This has made it vital for organisations to continuously adapt and enhance their protection.

DFIR (digital forensics and incident response) is emerging here as a significant solution due to its capacity to methodically deal with and solve cyber incidents, consequently strengthening an organisation’s ability to withstand changing threats.

Continue reading this article to learn about the significance of DFIR in today’s cybersecurity ecosystem.

Understanding digital forensics

In cybersecurity, the main goal is to uncover and understand cyber incidents to help organisations respond effectively and prevent future attacks. Digital forensics involves gathering, preserving, analysing and presenting digital evidence. Its key components are:

1. Data acquisition and preservation: Gathering evidence while ensuring its integrity and safeguarding against tampering or loss

2. Analysis and interpretation: Reviewing the gathered information to recognise patterns, reconstruct events, and derive meaningful insights

3. Reporting and presentation: Documenting findings clearly and concisely for use in legal proceedings or internal investigations.

Common tools used here include EnCase and FTK (Forensic Toolkit), which help with data acquisition, analysis and reporting. Imaging (creating exact copies of digital storage) and data carving (extracting data fragments from larger datasets) are crucial techniques for uncovering threats and guiding effective incident response efforts.
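To make the preservation step concrete, here is a minimal Python sketch of the kind of integrity check forensic workflows rely on: hashing an acquired disk image at the point of collection so any later tampering or corruption can be detected. The function names are illustrative, not taken from EnCase or FTK.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file (e.g. a disk image) and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity(path: str, recorded_digest: str) -> bool:
    """True if the evidence file still matches the digest recorded at acquisition."""
    return sha256_of(path) == recorded_digest
```

In practice the digest recorded at acquisition is itself documented in the chain-of-custody record, so the comparison can be repeated at any later stage of the investigation.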

The interplay between digital forensics and incident response

A robust cybersecurity plan must incorporate digital forensics and incident response (DFIR). Digital forensics aids incident response by offering a structured approach to collecting and examining digital evidence. For instance, investigating evidence can uncover harmful software, hacked accounts and unauthorised entry points, which are vital for understanding and handling the situation.

In incident response, digital forensics provides detailed insights to highlight the cause and sequence of events in breaches. This data is vital for successful containment, eradication of the danger, and recovery. Conducting post-incident forensic reports can similarly enhance security by pinpointing system vulnerabilities and suggesting actions to prevent future breaches.

Incorporating digital forensics into incident response essentially allows you to examine incidents thoroughly, leading to faster recovery, enhanced security measures and increased resilience to cyber threats. This partnership improves your ability to identify, evaluate and address cyber threats comprehensively.

Challenges in DFIR

Several challenges are associated with DFIR:

1. Technical hurdles

These involve managing encryption and methods used by criminals to hide their actions. Effectively managing large amounts of data can also be burdensome and require significant time.

2. Legal and ethical obstructions

Challenges related to legality and ethics arise when accessing sensitive personal data for forensic investigations, raising privacy concerns. Ensuring digital evidence is legally admissible in court is difficult and requires careful documentation and handling.

3. Operational challenges

These involve effectively coordinating and communicating within organisations, particularly during an incident. A well-thought-out plan and genuine preparedness for incidents are essential but frequently insufficient, resulting in ineffective responses to cyber threats. Dealing with these obstacles is crucial to the success of DFIR endeavours.

The future of DFIR in cybersecurity

Emerging trends and technologies are shaping the future of DFIR in cybersecurity. Artificial intelligence and machine learning are increasing the speed and effectiveness of threat detection and response. Cloud computing is revolutionising processes with its scalable options for storing and analysing data. Additionally, improved coordination with other cybersecurity sectors, such as threat intelligence and network security, leads to a more cohesive defence plan.

The training and specialisation of DFIR professionals continue to evolve as well. Key skills include using digital forensics tools, managing incident response and keeping pace with changing threats. Practitioners can also pursue certifications such as CISSP (Certified Information Systems Security Professional) and GIAC (Global Information Assurance Certification) credentials.

Endnote

Digital forensics and incident response are crucial for cybersecurity, addressing and mitigating threats effectively, making it vital for securing digital environments. Organisations must invest in DFIR capabilities and stay updated on evolving threats and technologies to ensure robust and resilient cybersecurity defences.

Read more

3 cybersecurity compliance challenges and how to address them – Earning those trust seals can strengthen relationships with board members and prospective customers, but it sure isn’t easy

]]>
How is AI transforming the insurtech sector? https://www.information-age.com/how-is-ai-transforming-insurtech-123510837/ Mon, 24 Jun 2024 14:34:48 +0000 https://www.information-age.com/?p=123510837 By Nick Martindale on Information Age - Insight and Analysis for the CTO

AI helping woman working in insurtech

AI fights customer frustrations and drives efficiency in insurtech, explains Nick Martindale, but it should augment, not replace, humans

The post How is AI transforming the insurtech sector? appeared first on Information Age.

]]>

Artificial intelligence (AI) is impacting almost every industry, and insurance – and the insurtech sector on which it depends – is no exception, with applications benefiting both customers and insurance firms themselves.

From a customer service perspective, the use of chatbots is helping to answer queries in a more efficient manner, providing customers with instant answers around the clock, says Quentin Colmant, CEO of insurtech firm Qover. “AI-powered chatbots can assist customers with contract management, freeing up human agents for more complex issues,” he says. “Additionally, AI analyses vast amounts of customer data to personalise insurance recommendations. This allows insurtechs to tailor products to the specific needs of customers, ensuring they are presented with the most relevant options.”

Generative AI is tackling customer frustrations

The emergence of generative AI is likely to see this evolve further, using multiple data sources to provide even more personalised digital interaction. “General information typically provided through static and dynamic FAQs are likely to be superseded by a more interactive human-style chatbot, which was on the increase even before the advent of generative AI,” says Tony Farnfield, partner and UK practice lead at management consulting firm BearingPoint. “The ability to link an AI bot to back-end policy and claims systems will scale back the need for human intervention.”

Generative AI can also help target specific areas of frustration for customers, says Rory Yates, global strategic lead at EIS, referencing its own client esure Group. “They focused on a key customer frustration when calling a contact centre, which was repetition, so being passed from one person to the next, and needing to re-explain the reason for making contact,” he says. “Their use of generative AI helps alleviate this. Then at the end of every call, generative AI is used to summarise the notes, capturing the details of the call, making sure accurate records are kept.”

Internal efficiency is another major benefit of the effective use of AI. Steve Muylle, professor of digital strategy and business marketing at Vlerick Business School, gives the example of AI helping insurers to generate accurate quotes almost immediately. “In 2019, Direct Line launched Darwin – a motor insurance platform that uses AI to determine individual pricing through machine learning,” he says. “This approach has translated into better customer reviews and improved customer service.”

“Another example is in Asia, where insurance companies work with Uber,” he adds. “After an accident, insurers can ask nearby Uber drivers to check accidents, leveraging their knowledge of cars and their ability to take photos or videos for reporting, which can then be analysed by AI. This provides the insurers with more data, potentially from a third party, and is also a side gig for the Uber drivers.”

Another application is in the onboarding and training of employees. “AI-powered virtual assistants can guide new employees through the onboarding process, providing support and answering questions around the clock,” says Christian Brugger, partner at digital consultancy OMMAX. “Interactive AI-powered tools, such as virtual reality and augmented reality, can offer immersive training experiences, simulating real-life scenarios employees might face.”

It’s also being used to improve efficiency more generally, in the same way as it might any other business. “The ability to automate high-volume, routine, low-value-added tasks has allowed insurers to speed up their services and increase productivity,” says Steve Bramall, credit director at Allianz Trade. “This frees up valuable experts to spend more time with customers and brokers, improving customer experience.”

The risks and ethics of AI in insurtech

Yet the use of AI also brings risks and ethical considerations for insurers and insurtech firms. “With all AI, you need to understand where the AI models are from and where the data is being trained from and, importantly, whether there is an in-built bias,” says Kevin Gaut, chief technology officer at insurtech INSTANDA. “Proper due diligence on the data is the key, even with your own internal data.”

It’s essential, too, that organisations can explain any decisions that are taken, warns Muylle, and that there is at least some human oversight. “A notable issue is the black-box nature of some AI algorithms that produce results without explanation,” he warns. “To address this, it’s essential to involve humans in the decision-making loop, establish clear AI principles and involve an AI review board or third party. Companies can avoid pitfalls by being transparent with their AI use and co-operating when questioned.”

AI applications themselves also raise the potential for organisations to get caught out in cyber-attacks. “Perpetrators can use generative AI to produce highly believable yet fraudulent insurance claims,” points out Brugger. “They can also use audio synthesis and deepfakes pretending to be someone else. If produced at high-scale, such fraudulent claims can overwhelm the insurer, leading to higher payouts.”

Cyber-attacks can also lead to significant data breaches, which can have serious consequences for insurers. “These can expose confidential client information, which inevitably poses new challenges towards fostering client trust,” says James Harrison, global head of insurance at Dun & Bradstreet. “Additionally, failure to comply with data protection regulations, such as GDPR, can lead to legal consequences and financial penalties.”

Having robust cybersecurity measures is essential, particularly when it comes to sensitive or personal data, says David Dumont, a partner at law firm Hunton Andrews Kurth, and it’s important to ensure these remain able to cope with new regulations. “In the EU, the legal framework on cybersecurity is evolving and becoming more prescriptive,” he explains. “Within the next year, insurtechs may, for example, be required to comply with considerable cybersecurity obligations under the Digital Operational Resilience Act (DORA), depending on the specific type of products and services that they offer.”

AI will augment, rather than replace, human capabilities

All this means AI requires careful handling if insurers and insurtechs are to realise the benefits, without experiencing the downsides. “The future of AI in insurtech is brimming with potential,” believes Colmant. “AI will likely specialise in specific insurance processes, like underwriting or claims management, leading to significant efficiency gains and improved accuracy. This will also likely lead to even greater personalisation and automation.

“However, the focus will likely shift towards a collaborative approach, with AI augmenting human capabilities rather than replacing them entirely. Throughout this evolution, ethical considerations will remain a top priority.”

How artificial intelligence is helping to slash fraud at UK banks – Rob Woods, fraud expert at LexisNexis Risk Solutions, tells Charles Orton-Jones why behavioural data and AI are a powerful fraud-fighting combination

Why is embedded insurance so popular right now? – Charles Orton-Jones asks five industry experts how embedded insurance could transform the sector and whether or not it offers real value for consumers

Will more AI mean more cyberattacks? – An increased use of AI within organisations could spell a rise in cyberattacks, explains Nick Martindale. Here’s what you can do

]]>
How artificial intelligence is helping to slash fraud at UK banks https://www.information-age.com/artificial-intelligence-helps-slash-fraud-at-uk-banks-123510779/ Wed, 19 Jun 2024 10:57:49 +0000 https://www.information-age.com/?p=123510779 By Charles Orton-Jones on Information Age - Insight and Analysis for the CTO

Hacking concept

Rob Woods, fraud expert at LexisNexis, tells Charles Orton-Jones why behavioural data and AI are a powerful fraud-fighting combination

The post How artificial intelligence is helping to slash fraud at UK banks appeared first on Information Age.

]]>

Billions of data-points from thousands of companies globally are crunched to create a digital fingerprint of users and a forensic understanding of money flows. And these shared insights are the strongest tool businesses can gain to protect against tightening regulation. Rob Woods, fraud expert at LexisNexis Risk Solutions, explains.

Catching fraudsters is getting harder. They are more sophisticated, armed with tools such as Gen AI, and lavishly financed by the spoils of their successful hits. UK Finance estimates that £1.17bn was stolen via fraud last year alone. The founder of the Evil Corp hacker gang (yes, that is the name) in Russia drives a custom-made Lamborghini.

With the Payment Systems Regulator’s compulsory reimbursement rules looming over banks and PSPs, there’s a renewed incentive to get a firm grip on fraudulent payments entering and leaving their virtual walls.

The fight back begins with shared data combined with powerful machine learning. By examining a vast array of indicators covering customers’ every interaction with their device, apps and web browser, banks are able to predict fraud with a high level of accuracy, before it happens.

A key ingredient is behavioural analytics. Real-time user behaviour is tracked and compared against past behaviours considered normal for that individual. Crucially, these behaviours are largely unique to the individual and almost impossible to fake.

1. Phone Movement

Is the phone being held in a landscape or portrait position? What is the rotation or angle of the phone? Do the sensors match the situation?

2. Touchscreen Behaviour

How is the touchscreen used? How much pressure is being applied? What is the swipe speed and motion?

3. Keyboard Behaviour

How is a keyboard used? What is the typing cadence? Were any special keys pressed? Were any keyboard shortcuts taken?

A bank or e-commerce provider can monitor these factors and look for deviations from normal. Maybe the user types in a different way or clicks the mouse in a manner suggesting a bot is behind the movements.
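As a simplified illustration of the kind of deviation check described above, the sketch below compares a session's typing cadence against a user's historical baseline using a crude z-score test. The threshold and function are invented for illustration; production systems model far richer behavioural signals than mean inter-key timing.

```python
from statistics import mean, stdev

def cadence_anomaly(baseline_ms: list[float], session_ms: list[float],
                    z_threshold: float = 3.0) -> bool:
    """Flag a session whose mean inter-key interval deviates from the
    user's historical baseline by more than z_threshold standard deviations."""
    mu, sigma = mean(baseline_ms), stdev(baseline_ms)
    if sigma == 0:
        # A perfectly uniform baseline: any difference at all is a deviation.
        return mean(session_ms) != mu
    z = abs(mean(session_ms) - mu) / sigma
    return z > z_threshold
```

A session typed at roughly the user's usual rhythm passes quietly; a session typed many times faster (as a bot might) trips the flag.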

But these indicators are just the start. Adding dozens more user interactions builds a digital fingerprint for each customer. What device are they using? If it’s the Apple iPhone they’ve used for five years, the bank can trust it. But what if the user suddenly switches to a cheap Android? This could be a red flag, triggering additional authentication via a code sent by text message. Are they using a Virtual Private Network (VPN)? Are they using private browsing, or wiping cookies? What time of day are they logging on? Are they cutting and pasting into fields, like forename, that most users would type?

Every click, every device, and every transaction is logged and incorporated into the risk model. Banks can then discover patterns for suspicious behaviours which would be undetectable to the human eye.

Each time a fraud is perpetrated, the model gets another boost. Retrospective analytics can be run to see what the giveaways were.

Looking backwards helps banks to look forwards. Understanding how dirty money flows through the banking system and overlaying this with these other fraud indicators then adds another layer of protection.

For example, mule accounts are usually tested by fraudsters in advance of laundering money to ensure transactions are able to pass through. They’ll deposit a small amount, £1, for example. Then transactions will be sent and received of equal value. The behavioural analytics engine can spot this and flag the risk to the bank.
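That probe-then-flow-through pattern can be caricatured in a few lines of Python. The `Txn` type, the probe threshold and the heuristic itself are invented for illustration; a real engine scores many signals together rather than applying a single rule.

```python
from dataclasses import dataclass

@dataclass
class Txn:
    direction: str  # "in" or "out"
    amount: float

def looks_like_mule_test(txns: list[Txn], probe_max: float = 2.0) -> bool:
    """Heuristic for the pattern described above: a tiny probe deposit
    followed by matched equal-value in/out transfers."""
    if not txns or txns[0].direction != "in" or txns[0].amount > probe_max:
        return False
    rest = txns[1:]
    ins = [t.amount for t in rest if t.direction == "in"]
    outs = [t.amount for t in rest if t.direction == "out"]
    # Money passing straight through: every inbound value has a matching outbound value.
    return len(ins) > 0 and sorted(ins) == sorted(outs)
```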

The model is nuanced. Users are given a trust score which moves up and down depending on the full gamut of factors.
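In spirit, such a scorecard can be sketched as a weighted rule set that maps a risk score to an action. All weights, signal names and thresholds below are invented for illustration; real models are learned from data rather than hand-tuned.

```python
# Hypothetical signal weights; a real engine learns these from data.
RISK_WEIGHTS = {
    "new_device": 30,
    "vpn_detected": 15,
    "cookies_wiped": 10,
    "pasted_into_name_field": 20,
    "unusual_login_hour": 10,
}

def risk_score(signals: set[str]) -> int:
    """Sum the weights of the risk signals observed this session."""
    return sum(RISK_WEIGHTS.get(s, 0) for s in signals)

def next_action(signals: set[str], step_up_at: int = 30, block_at: int = 60) -> str:
    """Map a session's risk score to a decision."""
    score = risk_score(signals)
    if score >= block_at:
        return "block"
    if score >= step_up_at:
        return "step_up_auth"  # e.g. send a one-time code by text message
    return "allow"
```

A trusted customer with no risk signals sails through; a new device alone prompts step-up authentication; a cluster of signals blocks the session for review.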

The result? An automated fraud detection system, which lets valid users open accounts, login, make payments, and take out loans with unobtrusive security, but shines a spotlight on criminals. Every good customer’s footprint, as defined within ThreatMetrix, is a unique representation of their actions in the digital realm. In 2023 fraud in the UK dipped, as detection methods outpaced the thieves.

Our approach

There are many anti-fraud engines based on device intelligence and collaborative fraud modelling, but at LexisNexis Risk Solutions, we have the industry-leading solution.

ThreatMetrix is used by nine out of the ten largest UK banks, by the top 20 S&P 500 companies, and thousands of organisations in 200 countries. It is the most sophisticated approach in the world, by far. It’s part of the reason why we’re consistently named as a leader in fraud prevention by global analyst firms.

A major factor is our scale. Thousands of customers feeding data into ThreatMetrix globally means many billions of data points, crowdsourced and shared across borders and across industries.

The vessel for this crowdsourced data is the Digital Identity Network. Another world-first innovation, it shares information from thousands of participants across banking, gaming, retail and other industries. We create digital identities for consumers based on their online fingerprint – devices, habits, transaction patterns and so on. Collectively we crowdsource information on valid consumers and fraudsters, to distinguish between the two. Last year we logged 92 billion transactions for 4 billion email addresses. When a user arrives at a website run by a member of the Digital Identity Network their reputation – for better or worse – precedes them.

Does it work? Our case studies show just how much impact our approach has on fraud.

Metro Bank wanted to combat mule accounts ahead of the forthcoming PSR reimbursement rules. LexisNexis Risk Solutions ThreatMetrix analysed all Metro Bank’s consumer behaviour and transactions data. We quickly identified the hallmarks of a mule account.

In six months at Metro Bank, our behavioural analytics platform identified £2.5 million of mule account payments, an uplift of 105 per cent. An additional one in eight of the accounts flagged as possible mules were investigated and confirmed. This initiative helped Metro Bank reduce first-party fraud by 44 per cent, with detection up 71 per cent.

The mission

The focus for anti-fraud technology is, as ever, to create a superior customer experience.

Behavioural analytics and real-time transactional intelligence mean consumers can be evaluated in a live environment. Genuine customers can be left to shop and bank without obstacles. A moving scorecard means less reliable actors can be flagged, given higher security requirements, or have access suspended, depending on their rating.

Complying with PSD2 is made easier. Banks are required to impose Strong Customer Authentication (SCA), a form of two-factor authentication (2FA). With our approach the device itself can be reliably nominated as one of the two factors – a convenient time saver for genuine users.

Behavioural analytics is now essential for banks and other organisations that wish to combat fraud and improve the customer experience. Fortunately, installing a system such as ThreatMetrix is straightforward. A cloud system can be connected via an API to ingest any data within an organisation. Improvements in analytics are introduced regularly, with no input from the bank required.

It’s important to stress, human intelligence can be included in the mix too. Our data scientists work with banks to act on their feedback, fine-tuning the models to adapt to new fraud concepts. Thresholds for asking customers for additional authentication, for example, can be set by banks’ policy teams. Implementation is personalised for every organisation.

Fraud is an arms race. The perpetrators are constantly innovating and launching attacks on an ever larger scale. Behavioural analytics turns the tables. The more data the banks accumulate, the more accurate their detection.

Behavioural analytics is above all a customer-centric approach to crime, liberating genuine users from intrusive checks, and quarantining villains before they’ve even got started. It marks a turn in the tide in the war on fraud.

To find out more visit: LexisNexis® Risk Solutions ThreatMetrix®

]]>
Will more AI mean more cyberattacks? https://www.information-age.com/will-more-ai-mean-more-cyberattacks-123510778/ Tue, 18 Jun 2024 15:10:45 +0000 https://www.information-age.com/?p=123510778 By Nick Martindale on Information Age - Insight and Analysis for the CTO

Cyberattacks within an organisation

An increased use of AI within organisations could spell a rise in cyberattacks, explains Nick Martindale. Here's what you can do

The post Will more AI mean more cyberattacks? appeared first on Information Age.

]]>

The growth in the use of artificial intelligence (AI) is impacting businesses in many ways, but one of the most dangerous could be as a result of exposing them to cyber threats. According to Gigamon’s 2024 Hybrid Cloud Security Survey, released in June 2024, 82 per cent of security and IT leaders around the world believe the global ransomware threat will grow as AI becomes more commonly used in cyberattacks.

AI making cyberattacks more sophisticated

One of the biggest risks comes from the use of AI to create much more convincing phishing and social engineering attacks. “Cybercriminals can use tools like ChatGPT to craft highly convincing emails and messages,” says Dan Shiebler, head of machine learning at Abnormal Security. “It’s now easier than ever for a threat actor to create perfectly written and even personalised email attacks, making them more likely to deceive recipients.”

AI is also creating entirely new ways to impersonate people. Four in 10 security leaders say they have seen an increase in deepfake-related attacks over the last 12 months, the Gigamon survey finds. “Deepfake technology holds real potential to manipulate employees into sharing personal details or even sending money through false video calls, recordings and phone calls,” says Mark Jow, EMEA technical evangelist at Gigamon.

In February 2024, a finance worker for engineering firm Arup was tricked into making a payment of $25.6 million after scammers impersonated the company’s chief financial officer (CFO) and several other staff members on a group live video chat. “The victim originally received a message purportedly from the UK-based CFO asking for the funds to be transferred,” says Chris Hawkins, security consultant at Prism Infosec.

“The request seemed out of the ordinary, so the worker went on a video call to clarify whether it was a legitimate request. Unknown to them, they were the only real person on the call. Everyone else was a real-time deepfake. The most difficult deepfakes to spot are audio followed by photos and then video, and for this reason it’s vishing attacks that are the main cause for concern in the industry at the present time.”

But AI is also being deployed by cybercriminals to identify opportunities and vulnerabilities to carry out distributed denial-of-service (DDoS) attacks. “It is being used both to better profile a target for selection of the initial attack vectors to be used, to ensure that they will have the highest impact, and to ‘tune’ an ongoing attack to overcome defences as they react,” says Darren Anstee, chief technology officer for security at NETSCOUT. “These capabilities mean that attacks can have a higher initial impact, with little or no warning, and can also change frequently to circumvent static defences.”

Mind your business – and its use of AI

Organisations are also potentially exposing themselves to cyber threats through their own use of AI. According to research by law firm Hogan Lovells, 56 per cent of compliance leaders and C-suite executives believe misuse of generative AI within their organisation is a top technology-associated risk that could impact their organisation over the next few years. Despite this, over three-quarters (78 per cent) of leaders say their organisation allows employees to use generative AI in their daily work.

One of the biggest threats here is so-called ‘shadow AI’, where criminals or other actors make use of, or manipulate, AI-based programmes to cause harm. “One of the key risks lies in the potential for adversaries to manipulate the underlying code and data used to develop these AI systems, leading to the production of incorrect, biased or even offensive outcomes,” says Isa Goksu, UK and Ireland chief technology officer at Globant. “A prime example of this is the danger of prompt injection attacks. Adversaries can carefully craft input prompts designed to bypass the model’s intended functionality and trigger the generation of harmful or undesirable content.”
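As a toy illustration of one layer a defender might place in front of a model, the sketch below screens user input for well-known override phrases. The phrase list and function are hypothetical; keyword screening alone is easily bypassed, and real defences combine input and output filtering, privilege separation and human review.

```python
# Illustrative phrase list only; attackers routinely rephrase to evade such screens.
SUSPECT_PHRASES = [
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Naive keyword screen for prompt-injection attempts."""
    lowered = user_input.lower()
    return any(phrase in lowered for phrase in SUSPECT_PHRASES)
```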

Jow believes organisations need to wake up to the risk of such activities. “These services are often free, which appeals to employees using AI applications off the record, but they generally carry a higher level of security risk and are largely unregulated,” he says. “CISOs must ensure that their AI deployments are secure and that no proprietary, confidential or private information is being provided to any insecure AI solutions.

“But it is also critical to challenge the security of these tools at the code level,” he adds. “Is the AI solution provided by a trusted and reputable provider? Any solutions should be from a trusted nation state, or a corporation with a good history of data protection, privacy and compliance.” A clear AI usage policy is needed, he adds.

What can I do to reduce the threat?

There are other steps organisations can take to reduce the risk of being negatively impacted by AI-related cyber threats, although currently 40 per cent of chief information security officers have not yet altered their priorities as a result, according to research by ClubCISO.

Educating employees on the evolving threat is vital, says Hawkins, but he points out that in the Arup attack the person in question had raised concerns. “Employee vigilance is only one piece of the puzzle and should be used in conjunction with a resilient data recovery plan and thorough Defence in Depth, with large money transfers requiring the sign-off of several senior members of staff,” he says.

Ev Kontsevoy, CEO of cybersecurity startup Teleport, believes organisations need to overhaul their approach around both credentials and privileges. “By securing identities cryptographically based on physical world attributes that cannot be stolen, like biometric authentication, and enforcing access based on ephemeral privileges that are granted only for the period of time that work needs to be completed, companies can materially reduce the attack surface that threat actors are targeting with these strategies,” he suggests.
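The ephemeral-privilege idea can be sketched in a few lines: access is granted with a built-in expiry, after which it is simply no longer valid. The `grant` and `is_valid` helpers below are hypothetical illustrations, not Teleport's API.

```python
from datetime import datetime, timedelta, timezone

def grant(role: str, minutes: int = 30) -> dict:
    """Issue a time-bounded privilege grant that expires automatically."""
    now = datetime.now(timezone.utc)
    return {"role": role, "expires": now + timedelta(minutes=minutes)}

def is_valid(g: dict) -> bool:
    """A grant is honoured only until its expiry time passes."""
    return datetime.now(timezone.utc) < g["expires"]
```

Because nothing needs to be revoked, a stolen credential is only useful for the remaining lifetime of the grant, which is the attack-surface reduction Kontsevoy describes.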

The bottom line is that organisations will need to draw on a variety of techniques to ensure they can keep up with the new threats that are emerging because of AI. “In the coming years, cybercriminals are expected to increasingly exploit AI, automating and scaling attacks with sophisticated, undetectable malware and AI-powered reconnaissance tools,” points out Goksu. “This could flood platforms with AI-generated content, deepfakes and misinformation, amplifying social engineering risks.

“Firms not keeping pace risk vulnerabilities in critical AI systems, potentially leading to costly failures, legal issues and reputational harm. Failure to invest in training, security and AI defences may expose them to devastating attacks and eroded customer trust.”

Read more

The importance of disaster recovery and backup in your cybersecurity strategy – A strong disaster recovery as-a-service (DRaaS) solution can prove the difference between success and failure when it comes to keeping data protected

Can NIS2 and DORA improve firms’ cybersecurity? Daniel Lattimer, Area VP at Semperis, explores NIS2 and DORA to see how they compare to more prescriptive compliance models

The changing role of the CISO – The cybersecurity head of any organisation has moved from being purely tech and reactive to someone forward-thinking and strategic. Lamont Orange looks at how to navigate the changing role of the CISO

]]>
Skills gap – 28% of the UK workforce is underqualified https://www.information-age.com/skills-gap-28-of-the-uk-workforce-is-underqualified-123510722/ Wed, 12 Jun 2024 09:00:00 +0000 https://www.information-age.com/?p=123510722 By Aoibhinn McBride on Information Age - Insight and Analysis for the CTO

skills gap concept

Jobbio's Aoibhinn McBride explains why the skills gap is the real problem in the technology hiring landscape

The post Skills gap – 28% of the UK workforce is underqualified appeared first on Information Age.

]]>

Forget economic downturns or mass restructuring. The real issue at the crux of the tech hiring landscape is the skills gap, something that is becoming increasingly pronounced in the UK.

According to different reports commissioned by the UK government, including the Levelling Up the United Kingdom white paper and the Department of Education’s Skills for Jobs policy paper, the skills gap is one of the main factors getting in the way of economic prosperity across all sectors, including tech.

It’s estimated that if UK businesses cannot adequately upskill current staff or secure new hires with sufficient technical skills, it could cost the UK economy more than £240 billion by 2026.

Moving forward

So, what can be done to address this issue?

The OECD Skills for Jobs report has identified that 28 per cent of the UK workforce is already underqualified.

When comparing the technical skills of UK workers with those of their US counterparts, the report found that while the US has an adequate number of workers with digital skills, including computer programming, the UK is struggling to find the right candidates to fill technical roles.

Separate data compiled by AND Digital has revealed that 58 per cent of workers have never received digital skills upskilling from their employer, highlighting that organisations need to do more when it comes to learning and development.

“For businesses to be fit for a digital present and future, they need individuals and teams with the skills to envision, design, test and iterate product ideas fast. This means mastering product management and new delivery skills, building prototypes at pace while scaling products quickly,” says Stephen Paterson, executive for consulting at AND Digital.

“What’s more, organisations need to make greater use of data to ensure products and services succeed in their market, delighting customers at every touchpoint. In order to do this individuals and teams need to develop and acquire the skills to develop an ‘engineering’ and ‘data’ mindset to understand users and their challenges, building exceptional digital experiences throughout the customer journey.

“Though none of these skills are strictly technical in nature, they are key to our digital future and necessary.”

Softly does it

It’s worth noting that the skills gap doesn’t just relate to hard or technical skills. Soft skills are becoming increasingly important within the workplace, especially as AI tools are adopted and soft or cognitive skills (AKA the skills machines are yet to master) are in demand.

According to the World Economic Forum’s Future of Jobs report, strategic and critical thinking, along with self-efficacy skills (resilience, flexibility and agility), are paramount.

This correlates with research conducted by IBM, which identified that soft skills should be refreshed every seven and a half years to stay relevant within the workplace.

And while upskilling or learning new digital skills from scratch will require a more formal approach, the good news is that you can start working on your soft skills in your everyday job.

Do you have the capacity to work on a project with someone on your team or in another department to develop your teamwork skills? Or perhaps you’ve noticed a trend emerging from data and can adapt this into a business strategy: critical and creative thinking boxes ticked.

Brushing up on your soft skills can even be as simple as leading a team meeting if you usually shy away from presenting, or speaking in front of a large group of people.

Ready to put your skills to the test and find your next role in tech? Head to the Information Age Job Board today

Read more

Essential skills for becoming a CTO – Essential skills for becoming a CTO go beyond being a JavaScript virtuoso or a Scrum Master. Soft skills such as being an encouraging manager and explaining solutions to the wider business are equally important. CTO coach Andy Skipper shares his tips on how to take your career to the next level

The post Skills gap – 28% of the UK workforce is underqualified appeared first on Information Age.

]]>