
Semantic Kernel: empower your LLM apps
Semantic Kernel (SK) is an SDK that integrates AI Large Language Models (LLMs) with conventional programming languages.
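To make the idea concrete, here is a minimal Python sketch of calling an LLM prompt from ordinary code. Semantic Kernel's API has evolved across releases, so the class and method names below are illustrative assumptions rather than the SDK's definitive interface; the model id and key are placeholders.

# Illustrative sketch only: Semantic Kernel's Python API has changed between
# releases, so treat these names as assumptions; model id and key are placeholders.
import asyncio
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

async def main():
    kernel = sk.Kernel()
    # Register an LLM service with the kernel.
    kernel.add_service(OpenAIChatCompletion(ai_model_id="gpt-4o-mini", api_key="YOUR_KEY"))

    # A "semantic function" is a templated prompt that conventional code can call.
    summarize = kernel.add_function(
        plugin_name="writer",
        function_name="summarize",
        prompt="Summarize the following text in one sentence:\n{{$input}}",
    )

    result = await kernel.invoke(summarize, input="Semantic Kernel lets conventional code call LLM prompts as if they were functions.")
    print(result)

asyncio.run(main())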
Transform is an annual technology conference for software developers, data scientists, and ML engineers, focused on applied artificial intelligence and digital transformation.
When: November 28 to December 6, 2022
Where: online
Website: transform.devrain.com
Cost: free, but you can donate to DonorUA, the primary recruiting and coordination platform for blood donors in Ukraine, which supports Ukrainians during the war
Organizer: DevRain, a software development company and Microsoft Partner
The conference’s partners are Microsoft and NVIDIA.
Transform 2022 will consist of workshops (Nov 28 — Dec 2) and a full-day conference (Dec 6).
The following topics will be covered:
Chatbots and personal assistants
Language and speech services
Document processing
MLOps
Azure OpenAI Service
GitHub Copilot
AI certifications
Deep Learning
If you want to attend the workshops and the full-day conference, you need to register here.
If you want to attend the NVIDIA workshops, you'll need to register separately.
Businesses worldwide are using artificial intelligence to solve their greatest challenges. Healthcare professionals use AI to enable more accurate, faster diagnoses of patients. Retail businesses use it to offer personalized customer shopping experiences. Automakers use it to make personal vehicles, shared mobility, and delivery services safer and more efficient. Deep learning is a powerful AI approach that uses multi-layered artificial neural networks to deliver state-of-the-art accuracy in tasks such as object detection, speech recognition, and language translation.
Using deep learning, computers can learn and recognize patterns from data that are considered too complex or subtle for expert-written software.
In this workshop, you’ll learn how deep learning works through hands-on exercises in computer vision and natural language processing. You’ll train deep learning models from scratch, learning tools and tricks to achieve highly accurate results. You’ll also learn to leverage freely available, state-of-the-art pre-trained models to save time and get your deep learning application up and running quickly.
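As a small taste of the pre-trained-model idea mentioned above, the sketch below loads a freely available image classifier with PyTorch/torchvision. It is an illustrative example, not the workshop's own material; the image path is a placeholder.

# Illustrative only: load a freely available pre-trained classifier with torchvision.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT   # pre-trained ImageNet weights
model = models.resnet18(weights=weights)
model.eval()

# Preprocess the image the same way the model was trained (path is a placeholder).
preprocess = weights.transforms()
image = preprocess(Image.open("example.jpg")).unsqueeze(0)

with torch.no_grad():
    logits = model(image)
print(weights.meta["categories"][logits.argmax(dim=1).item()])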
Register: Fundamentals of Deep Learning (nvidia.com)
Learn how to use GPUs to deploy machine learning models at production scale with the Triton Inference Server. At scale, machine learning models can interact with millions of users in a day. As usage grows, the cost in both money and engineering time can prevent models from reaching their full potential. Challenges like these inspired the creation of Machine Learning Operations (MLOps).
Practice Machine Learning Operations by:
Deploying neural networks from a variety of frameworks onto a live Triton Server
Measuring GPU usage and other metrics with Prometheus
Sending asynchronous requests to maximize throughput (see the sketch below)
Upon completion, learners will be able to deploy their own machine learning models on a GPU server.
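As a rough illustration of the asynchronous-request pattern listed above, the sketch below uses NVIDIA's tritonclient Python package against a locally running Triton server. The model name, tensor names, and shape are placeholders, not the workshop's actual example.

# Illustrative sketch: model name, tensor names, and shape are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build one request: a single FP32 input tensor for a hypothetical "my_model".
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

# Queue several requests without waiting on each one; overlapping requests
# like this is what keeps the GPU busy and throughput high.
pending = [client.async_infer("my_model", inputs=[infer_input]) for _ in range(8)]
results = [p.get_result() for p in pending]
print(results[0].as_numpy("output__0").shape)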
Prerequisites: familiarity with at least one machine learning framework such as PyTorch, TensorFlow, ONNX, or TensorRT. Familiarity with Docker is recommended but not required.
Register: Accelerating and Scaling Inference with NVIDIA GPUs (splashthat.com)
Additional references:
DevRain Upskilling Initiative: Website (in Ukrainian), LinkedIn group (in English)
News, posts, articles and more!
Bias in job descriptions refers to the use of language or content that may favor or exclude certain groups of people based on their gender, race, age, religion, or other characteristics.
Looking to improve your language and writing abilities? Look no further than the ChatGPT Prompt Book, written by Oleksandr Krakovetskyi, CEO at DevRain, with the help of ChatGPT.