BioStrand - Accelerating towards precision
In statistics, precision refers to the degree of refinement with which an operation is performed or a measurement is stated. Can we personalise our lifestyles in terms of daily nutrition, diagnosis, and medical treatment? That is exactly what precision technology aims for. Precision healthcare, for instance, integrates clinical and pathological indexes with state-of-the-art technologies to create diagnostic, prognostic, and therapeutic strategies tailored to the individual for improved care.
Every person has a unique genetic map, so a deeper understanding of it will help us devise more effective, personalised treatment plans. Recent advances in technology have produced a wealth of data which, when interpreted and analysed correctly, has the potential to drive real change in clinical practice, from personalised therapy and intelligent drug design to population screening and electronic health record mining. This could form the basis for “precision healthcare”. Since the completion of the Human Genome Project, we have been equipped with far more data on the entire genome and on variants related to human health. Combined with data science and next-generation sequencing technology, this growth in data has driven the number of available genetic tests past 70,000. As data sets grow at an exponential rate, so does the number of biological insights that can be drawn from them. At the same time, we need novel and improved approaches to analyse these data sets for conclusive insights.
The concept of precision is not limited to healthcare but is present in almost every aspect of our lives. Another example is “precision agriculture”, where site-specific crop management is carried out with the aid of remote sensing technologies that measure field variability. This approach helps determine the optimal soil requirements for crops to ensure productivity and sustainability. The first wave of precision agriculture uses elements such as GPS soil sampling, computer-guided fertiliser and herbicide application, and remote sensing technologies to monitor the requirements for land, water, and other resources. Innovation in precision agriculture continues to grow, with more and more farms adopting the technology to feed a growing population.
Most breakthroughs happening in medicine and agriculture are fuelled by genomic sciences, and yet only the surface has been scratched; the potential seems boundless. Data volumes continue to scale beyond what we can handle and process at present, and since this is an emerging field, its information technology support will have to evolve to include proper decentralised storage systems, collaborative platforms for accessing open data, and robust security and compliance protection.
The state of genomic exploration and application: prioritising precision medicine
Researchers across the globe estimate that as many as 2 billion human genomes could be sequenced by 2025. Pharmaceutical companies have doubled their R&D investment in personalised medicine over the past five years, and an additional influx of funds (approximately 33%) is expected after the pandemic. Despite the advances in genomic testing and sequencing technologies, and an avalanche of data, significant challenges remain in prioritising precision medicine, ranging from data decentralisation and data security to efficiency issues. Multi-institutional collaboration and transparent data sharing need to be encouraged, aiming for a better understanding of how precision medicine can be applied in developed and developing countries. Several key factors are shaping today's genomics marketplace.
Financing
Thanks to technological advances, the price of next-generation sequencing has fallen to the point where many research applications have become dramatically more accessible. The cost of sequencing a human genome, for example, has dropped from the roughly $2.7 billion spent by the Human Genome Project (completed in 2003) to about $1,500 in 2020, and easily accessible genetic tests that analyse partial genomic regions are now on the market for less than $100. As we progress, more and more people will have their whole genome sequenced, increasing the amount of data generated and opening up the challenge of handling it.
Data decentralisation
Collaboration is key to scientific discovery, but researchers need to collaborate while ensuring compliance with applicable security regulations, and without modern infrastructure scientists still struggle to share insights. Cloud computing is a popular solution for data decentralisation and easy access: it provides a secure way to share data across collaborators without hampering breakthroughs.
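As a minimal sketch of what secure, time-limited sharing can look like in practice, the snippet below generates a pre-signed URL for an object in AWS S3; the bucket and file names are hypothetical placeholders, and this is one illustrative mechanism rather than a prescribed architecture.

```python
import boto3

# S3 client; credentials are resolved from the environment
# (e.g. AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY or an IAM role).
s3 = boto3.client("s3")

def share_dataset(bucket: str, key: str, expires_in: int = 3600) -> str:
    """Return a pre-signed URL granting temporary read access to one object.

    The collaborator needs no account of their own; the link expires after
    `expires_in` seconds, so sensitive data is never exposed indefinitely.
    """
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_in,
    )

# Hypothetical example: share one variant-call file for 24 hours.
url = share_dataset("example-genomics-bucket", "cohort1/sample42.vcf.gz",
                    expires_in=24 * 3600)
print(url)
```

Access granted this way stays auditable and revocable on the provider side, which is what makes it easier to keep compliant than ad hoc file transfers.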
Neophobia
In-house pipelines dedicated to unique needs are quite common across organisations, under the presumption that third-party infrastructure will not provide the flexibility to analyse data exactly as desired. The transition to a collaborative platform might seem daunting, but the advantages far outweigh the drawbacks: in-house pipelines demand large investments and considerable resources to maintain, and organisations that run them lose focus on discovery and innovation.
Security
Managing privacy, security, and compliance regulations, both globally and regionally, is complex, and doing it well is critical to ensuring researchers do not put their organisations at risk. Clear policies therefore need to be established to prevent sensitive information from leaking to third parties.
Evaluating your current informatics solutions
Given the current trends in genomics and the growing amount of data, it is important to evaluate your current informatics infrastructure. The following questions can guide that evaluation.
Can the current informatics infrastructure handle the scale of data today, and over the next 5-10 years of planned projects?
The volume of genomics data flowing through discovery pipelines, and the complexity that comes with it, will continue to grow as we progress, and we need the proper infrastructure to store and process it. By one rough estimate, some 40 exabytes of storage capacity will be needed to handle the growth in genome sequencing. Foreseeing tomorrow's challenges, we must prepare infrastructure that makes this data easy to handle and assess.
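As a back-of-envelope illustration of where an estimate of that order comes from (both inputs below are assumptions for illustration, not measured values):

```python
# Back-of-envelope storage estimate; all inputs are assumptions.
genomes = 2e9        # ~2 billion genomes projected to be sequenced by 2025
gb_per_genome = 20   # assumed compressed size of one whole genome, in GB

total_gb = genomes * gb_per_genome
total_exabytes = total_gb / 1e9   # 1 exabyte = 1e9 GB

print(f"Estimated storage: {total_exabytes:.0f} exabytes")
# -> Estimated storage: 40 exabytes
```

Raw sequencing output, intermediate analysis files, and replication would push the real requirement well beyond a figure like this.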
Is the infrastructure you have built safe and secure?
In-house infrastructure and platforms need to be maintained to ensure compliance with privacy, security, and other regulations. These are critical dimensions when examining genomics platforms, and they are a burden for researchers to carry: in-house systems must be self-reliant in their security and compliance expertise.
Is it easy to collaborate with other organisations without compromising privacy and security?
Accelerating innovation requires scientific collaboration across interdisciplinary teams, which may be internal or external and spread across the globe, and that raises potential privacy and security risks. The system needs to be established in line with global compliance requirements and offer easy access to all stakeholders while protecting sensitive data.
Finding answers in the scale, efficiency, and security of the cloud
Scientists in academia and industry should ask themselves these three questions when evaluating their do-it-yourself (DIY) pipelines: are they effective, are they efficient, and do they offer a secure way to share data with collaborators without risking a leak of sensitive information? If we are focused on advancing precision health at greater speed, why insist on spending time, resources, and expertise building infrastructure that already exists, and that eliminates the heavy lifting and risk factors, instead of directing those resources and that expertise towards gaining valuable scientific insights?
The advantages of cloud-based next-generation sequence analysis
Cloud computing was invented primarily for technology and business users, not for genomics and other sciences specifically. Yet for the growing list of genomics and clinical companies, it provides the elasticity to scale computing resources with the amount of data being analysed, free of the constraints of a local cluster. Adopting cloud computation lets researchers hand off the complexities of scaling infrastructure, offers flexibility through pre-configured pipelines, and supports controlled, reproducible workflows.
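As a minimal sketch of what that elasticity means in practice (the sizing rule, parameters, and function name below are illustrative assumptions, not any provider's API):

```python
import math

def size_cluster(num_samples: int,
                 samples_per_node: int = 8,
                 max_nodes: int = 256) -> int:
    """Decide how many worker nodes to request for one analysis run.

    On a fixed local cluster this number is capped by owned hardware;
    in the cloud it is requested per run and released when the run ends.
    """
    return min(max_nodes, math.ceil(num_samples / samples_per_node))

# A pilot study and a population-scale cohort share the same pipeline
# but request very different amounts of compute.
print(size_cluster(12))      # -> 2 nodes
print(size_cluster(10_000))  # -> 256 nodes (capped by the budget ceiling)
```

The point is not the specific rule but that compute is provisioned per workload, so small and large studies each pay only for what they use.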
How can BioStrand accelerate your work towards precision?
BioStrand can help you achieve the scale, efficiency, and competitive advantages of global data while eliminating the associated risk factors and security concerns. With BioStrand, you can leverage end-to-end bioinformatics solutions and cloud-based integrations and operate in a secure, collaborative environment with high confidentiality.
BioStrand enables researchers to push towards scientific breakthroughs without the burden of unnecessarily complex in-house infrastructure and maintenance. BioStrand promises a future-proof investment in integrated genomics solutions and technological innovation.
Join us in accelerating scientific collaboration and discover how to tackle the world’s most exciting opportunities in human health and other sectors with our platforms.
References:
- Big Data: Astronomical or Genomical? PLOS Biology, 2015.
- Personalized Medicine Is Gaining Traction but Faces Multiple Challenges. Tufts Center for the Study of Drug Development, 2015.
- People want more compensation, security for genomic data. Cornell Chronicle, 2020.