Jio Platforms Teams with NVIDIA to Bring State-of-the-Art AI Cloud Infrastructure to India

Jio Platforms Limited today announced plans to build, in collaboration with NVIDIA, state-of-the-art cloud-based AI compute infrastructure to advance India’s position as a growing force in artificial intelligence.

The new AI cloud infrastructure will enable researchers, developers, startups, scientists, AI practitioners and others across India to access accelerated computing and high-speed, secure cloud networking to run workloads safely and with extreme energy efficiency.

The new infrastructure will greatly speed up a wide range of India’s key initiatives and AI projects, including AI chatbots, drug discovery, climate research and more.

As part of the collaboration, NVIDIA will provide Jio with end-to-end AI supercomputer technologies including CPU, GPU, networking, and AI operating systems and frameworks for building the most advanced AI models. Jio will manage and maintain the AI cloud infrastructure and oversee customer engagement and access.

Mukesh Ambani, Chairman & Managing Director, Reliance Industries Limited, said of the partnership, “As India advances from a country of data proliferation to creating technology infrastructure for widespread and accelerated growth, computing and technology super centres like the one we envisage with NVIDIA will provide the catalytic growth just like Jio did to our nation’s digital march. I am delighted with the partnership with NVIDIA and looking forward to a purposeful journey together.”

Akash Ambani, Chairman of Reliance Jio Infocomm Limited, said, “At Jio, we are committed to fuelling India’s technological renaissance by democratizing access to cutting-edge technologies. Our collaboration with NVIDIA is a significant step in this direction. Together, we will develop an advanced AI cloud infrastructure that is secure, sustainable, and deeply relevant to India’s unique opportunities. This state-of-the-art platform will be a catalyst in accelerating AI-driven innovations across sectors, from healthcare and education to enterprise solutions. Our vision is to make AI accessible to researchers, start-ups, and enterprises across the nation, thereby accelerating India’s journey towards becoming an AI powerhouse.”

“We are delighted to partner with Reliance to build state-of-the-art AI supercomputers in India,” said Jensen Huang, founder and CEO of NVIDIA. “India has scale, data and talent. With the most advanced AI computing infrastructure, Reliance can build its own large language models that power generative AI applications made in India, for the people of India.”

The Top Large Language Models (LLMs) Revolutionizing Technology in 2023

Language is a powerful tool that enables us to communicate and understand the world around us. With advancements in technology, large language models (LLMs) have emerged as game-changers, revolutionizing various fields. In this article, we will delve into the top LLMs that are shaping the way we interact with technology.

  1. OpenAI’s GPT-3.5:

OpenAI’s GPT-3.5, one of the most remarkable LLMs, has captivated the tech world. It stands as a testament to the impressive strides made in natural language processing (NLP). Built on the 175-billion-parameter GPT-3 family, GPT-3.5 is capable of generating highly coherent and contextually relevant responses.

  2. Google’s Switch Transformer:

Google’s Switch Transformer is another groundbreaking LLM that has garnered significant attention. Designed to handle vast amounts of data efficiently, it uses a sparse mixture-of-experts architecture: a lightweight router sends each token to exactly one expert network, so only a fraction of the model’s parameters (which scale past a trillion) are active for any given input. This makes it remarkably scalable and compute-efficient on language tasks such as translation and question answering.
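
To make the routing idea concrete, here is a minimal sketch (my own simplification in PyTorch, not Google’s implementation) of Switch-style top-1 expert routing:

```python
# Toy sketch of Switch-style top-1 expert routing (illustrative only).
import torch

def switch_route(tokens, router, experts):
    """Send each token to exactly one expert, scaled by its gate value."""
    logits = tokens @ router                      # (n_tokens, n_experts)
    probs = torch.softmax(logits, dim=-1)
    gate, expert_idx = probs.max(dim=-1)          # top-1 expert per token
    out = torch.zeros_like(tokens)
    for i, expert_weight in enumerate(experts):
        mask = expert_idx == i
        if mask.any():
            # Each expert is a simple feed-forward weight matrix here.
            out[mask] = gate[mask].unsqueeze(-1) * (tokens[mask] @ expert_weight)
    return out

d_model, n_experts = 8, 4
router = torch.randn(d_model, n_experts)
experts = [torch.randn(d_model, d_model) for _ in range(n_experts)]
print(switch_route(torch.randn(10, d_model), router, experts).shape)  # torch.Size([10, 8])
```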

  3. Microsoft’s Turing NLG:

Microsoft’s Turing Natural Language Generation (NLG) model showcases the company’s dedication to pushing the boundaries of AI. With 17 billion parameters, Turing NLG offers impressive text generation capabilities. It demonstrates exceptional performance in various language tasks, such as summarization, translation, and sentiment analysis.

  4. NVIDIA’s Megatron:

Megatron, developed by NVIDIA, has made waves in the NLP community. This LLM shines when it comes to parallel training on multiple GPUs: its architecture splits both computation and the model’s own weight matrices across devices, resulting in faster training times and the ability to scale to very large models.
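
As a rough illustration of the idea, this toy sketch (my simplification, simulated in one process) splits a linear layer’s weight matrix column-wise so each shard, standing in for a GPU, computes a slice of the output:

```python
# Toy sketch of Megatron-style column parallelism (simulated on one device;
# a real setup would place each shard on its own GPU and all-gather results).
import torch

def column_parallel_linear(x, full_weight, n_shards):
    shards = full_weight.chunk(n_shards, dim=1)   # split output columns
    partials = [x @ w for w in shards]            # each "GPU" computes a slice
    return torch.cat(partials, dim=-1)            # gather slices back together

x = torch.randn(4, 8)    # 4 tokens, hidden size 8
W = torch.randn(8, 16)   # full weight: hidden 8 -> output 16
assert torch.allclose(column_parallel_linear(x, W, 2), x @ W, atol=1e-5)
```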

  5. Salesforce’s CTRL:

CTRL, developed by Salesforce, focuses on generating coherent and controlled language. Users prepend a control code to the prompt, steering the model toward a target domain or style for more targeted and precise responses. CTRL’s fine-tuning capabilities make it a valuable asset for content generation and human-like conversational experiences.
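
A hedged sketch of what this looks like in practice, using the Hugging Face transformers library; the checkpoint name "Salesforce/ctrl" and the "Reviews Rating:" control code are assumptions based on Salesforce’s public release:

```python
# Sketch of CTRL-style controlled generation via Hugging Face transformers.
# Checkpoint name and control code are assumptions from the public release.
from transformers import pipeline

generator = pipeline("text-generation", model="Salesforce/ctrl")

# The leading control code steers CTRL toward review-style text.
prompt = "Reviews Rating: 4.0\nThis laptop"
print(generator(prompt, max_length=60)[0]["generated_text"])
```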

  6. EleutherAI’s GPT-Neo:

EleutherAI’s GPT-Neo is an open-source LLM that emphasizes accessibility and the democratization of AI technology. It is released at several sizes (from 125 million to 2.7 billion parameters), accommodating diverse use cases and computational budgets. GPT-Neo is modeled on the GPT-3 architecture, making it a versatile and resourceful tool.
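
Because the weights are public, getting started is straightforward; a minimal sketch using the Hugging Face transformers library (the 1.3B checkpoint name comes from EleutherAI’s public release):

```python
# Minimal sketch: text generation with an open GPT-Neo checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
print(generator("Open-source language models", max_length=40)[0]["generated_text"])
```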

  7. NVIDIA and Microsoft’s Megatron-Turing:

Megatron-Turing NLG, a collaboration between NVIDIA and Microsoft, brings together NVIDIA’s Megatron training infrastructure and Microsoft’s Turing NLG line of models. At 530 billion parameters, it offers enhanced performance and capabilities, excelling in NLP tasks including language generation, summarization, and understanding.

  8. Google’s T5:

Google’s Text-To-Text Transfer Transformer (T5) is a highly flexible and versatile LLM. With up to 11 billion parameters in its largest variant, T5 casts every language task as text-to-text, handling translation, summarization, question answering, and more. Its adaptability makes it a powerful tool for developers and researchers.
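
T5’s defining trait is that every task is phrased as plain text with a task prefix; a minimal sketch using the public t5-small checkpoint via Hugging Face transformers:

```python
# Minimal sketch of T5's text-to-text interface: tasks are selected by
# a plain-text prefix rather than separate task-specific heads.
from transformers import pipeline

t5 = pipeline("text2text-generation", model="t5-small")
print(t5("translate English to German: The weather is nice today."))
print(t5("summarize: Large language models are trained on vast text corpora "
         "and can perform many tasks without task-specific components."))
```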

The world of LLMs is rapidly evolving, empowering us with advanced AI capabilities. As these models continue to develop and refine, they hold the potential to reshape how we interact with technology, enabling more natural and human-like experiences.

ChatGPT: Breaking Down the Language Barrier!

ChatGPT is a language model developed by OpenAI, one of the world’s leading artificial intelligence research organizations. As a language model, ChatGPT uses machine learning algorithms to understand and generate natural language responses to a wide range of questions and prompts.

The technology behind ChatGPT is based on a deep learning model known as a transformer, which is trained on vast amounts of data to learn the patterns and structures of human language. This allows ChatGPT to generate natural and coherent responses to a wide range of prompts, from simple questions to complex conversations.
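
At the core of the transformer is scaled dot-product attention, which lets each position weigh every other position when building its representation; a minimal NumPy sketch (my simplification, omitting multiple heads and masking):

```python
# Toy sketch of scaled dot-product attention, the core transformer operation
# (single head, no masking, for illustration only).
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mix of values

Q = np.random.randn(4, 8)   # 4 query positions, dimension 8
K = np.random.randn(6, 8)   # 6 key/value positions
V = np.random.randn(6, 8)
print(attention(Q, K, V).shape)  # (4, 8)
```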

To use ChatGPT, users simply type a question or prompt into the system, and ChatGPT generates a response based on its training and understanding of natural language. The model is refined over successive training runs as it is exposed to more data and a wider range of prompts, though it does not learn from individual conversations in real time.
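
For developers, the models behind ChatGPT are also reachable programmatically; a minimal sketch using OpenAI’s Python client (the pre-1.0 ChatCompletion interface; the API key is a placeholder):

```python
# Minimal sketch of querying a ChatGPT-family model through OpenAI's
# Python client (pre-1.0 interface). Replace the key with your own.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain transformers in one sentence."}],
)
print(response["choices"][0]["message"]["content"])
```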

Potential Use Cases

ChatGPT is used in a variety of applications, from customer service chatbots to virtual assistants and language translation systems. Its ability to understand and generate natural language responses has made it a valuable tool for businesses and organizations looking to improve their communication with customers and users.

  • Customer Service: ChatGPT can be used to provide customer service through chatbots, which can help businesses automate routine interactions with customers and improve response times.
  • Education: ChatGPT can be used to create personalized learning experiences for students, providing feedback and guidance on assignments and coursework.
  • Healthcare: ChatGPT can be used to provide virtual healthcare services, such as symptom triage or mental health counseling.
  • Content Creation: ChatGPT can be used to generate content for websites, social media, and other digital channels, such as news articles or product descriptions.
  • Translation: ChatGPT can be used for language translation, allowing users to communicate with people who speak different languages.
  • Research: ChatGPT can be used for research purposes, such as language modeling and information retrieval.
  • Entertainment: ChatGPT can be used to create interactive games and storytelling experiences, allowing users to engage with virtual characters and narratives.

Limitations

While ChatGPT is an impressive achievement in natural language processing and machine learning, there are still some limitations to the technology that are important to consider. Here are some of the key limitations of ChatGPT:

  • Contextual Understanding: ChatGPT is trained on a vast amount of data, but it still has limitations in terms of its ability to understand the context of a conversation or prompt. It can struggle with understanding nuances or sarcasm, which can lead to incorrect or inappropriate responses.
  • Biases: Like any machine learning model, ChatGPT can also be prone to biases in the data it is trained on. This can lead to responses that may unintentionally reinforce harmful stereotypes or prejudices.
  • Generating Original Content: While ChatGPT is capable of generating text, it’s important to remember that it’s doing so based on patterns and structures it has learned from existing data. It may not always be capable of generating truly original content that is entirely unique or creative.
  • Limited World Knowledge: While ChatGPT can learn from a large amount of data, it may not have the same breadth of knowledge and experience as a human. It may struggle with tasks that require a deep understanding of the world and the ability to reason and make connections between different ideas.

While ChatGPT is a powerful tool, it’s important to understand its limitations and use it appropriately. As with any technology, it’s not a perfect replacement for human interaction or decision-making.

Health Care: COVID-19 Accelerates AI Use

From predicting outbreaks to devising treatments, doctors are turning to AI in an effort to combat the COVID-19 pandemic.

While machine learning algorithms were already becoming a part of health care, COVID-19 is likely to accelerate their adoption. But lack of data and testing time could hinder their effectiveness — for this pandemic, at least.

With millions of cases and outbreaks in every corner of the world, speed is of the essence when it comes to diagnosing and treating COVID-19. So it’s no surprise doctors were quick to employ AI tools in an effort to get ahead of what could be the worst pandemic in a century.

  • HealthMap, a web service run by Boston Children’s Hospital that uses AI to scan social media and other reports for signals of disease outbreaks, spotted some of the first signs of what would become the COVID-19 outbreak, days before the WHO formally alerted the rest of the world.
  • Early in the outbreak, the Chinese tech company Alibaba released an AI algorithm that analyzes CT scans of suspected coronavirus patients and can diagnose cases automatically in a matter of seconds.
  • In New York, Mount Sinai Health System and NYU Langone Health have developed AI algorithms that can predict whether a COVID-19 patient is likely to suffer adverse events in the near future and determine when patients will be ready to be discharged. Such systems can help overburdened hospitals better manage the flow of supplies and personnel during a medical crisis.

Even before COVID-19, AI was already becoming a bigger part of modern health care. Nearly $2 billion was invested in companies involved in health care AI in 2019, and in the first quarter of 2020, investments hit $635 million — more than four times the amount seen in the same period of 2019, according to digital health technology funder Rock Health.

  • The advance of AI is partially a result of the rapid increase in data, the lifeblood of any machine learning system. The amount of medical data in the world is estimated to double every two months.
  • Engineer and entrepreneur Peter Diamandis told Wired an estimated 200 million physicians, scientists and technologists are now working on COVID-19, generating and sharing data “with a transparency and at speeds we’ve never seen before.”
  • “We understand who is at risk and how they’re at risk, and then we can get the right treatment to them,” says Zeeshan Syed, the CEO of Health at Scale, an AI health care startup.

In trials, at least, AI has demonstrated a decent record of success, especially when it comes to rapidly diagnosing COVID-19 by interpreting medical scans.

  • A study published in Nature Medicine in May 2020 found an AI system was more accurate than a radiologist at diagnosing COVID-19 patients when it combined CT scans (cross-sectional X-ray images of the lungs) with clinical symptoms.
  • A systematic review of preprint and published studies of AI diagnostic systems for COVID-19, published in the British Medical Journal in April, noted the models reported “good to excellent predictive performance,” but cautioned the data was still limited for real-world applications and at high risk of bias.

That’s the perennial challenge for AI systems in any field. Experts worry models that perform well in an experiment may not be able to replicate that success in a hospital under stress.

  • “There is a lot of promise in using algorithms, but the data in the biomedical space can be really difficult to deal with,” says Gabe Musso, the chief science officer at BioSymetrics, a biomedical AI company that uses machine learning for simulation-based drug discovery. Genetic data, imaging data and data from electronic health records are often unstructured and rarely share a common format, complicating efforts to feed the information into an algorithm.
  • Many of the AI diagnostic systems being rushed into the fight against COVID-19 were developed before the pandemic and thus were trained on other respiratory diseases like tuberculosis. That reduces their accuracy — especially if their training datasets don’t match the gender or age of typical COVID-19 patients.
  • As a result, pioneering computer scientist Kai-Fu Lee wrote recently, “I would give [AI] a B-minus at best” for its performance during the pandemic.

As both the size and quality of medical data on COVID-19 improve, so should the AI systems that draw from it. But that will take time.

  • “AI will not be as useful for COVID as it is for the next pandemic,” Rozita Dara, a computer scientist at the University of Guelph, told Science recently. (Source: Axios)