AI Security Lab

The AI Security Lab specializes in security frameworks and guidelines for the safe integration of Artificial Intelligence. We conduct research to prevent data breaches and adversarial attacks that may arise during AI model deployment, establishing practical security standards that enable organizations to utilize AI with confidence.

Inquiry details: I was introduced to AutoML, which is said to make it easy to build machine learning models …
Inquiry details: I work in the information security team at my company and am in charge of security reviews. Our …
OWASP Top 10 for LLM Applications — LLM01: Prompt Injection. Malicious users may manipulate the LLM (GenAI) to redefine system prompts or …
Overview of RAG: RAG (Retrieval-Augmented Generation) has become an essential component, alongside PEFT, in the development of GenAI (Large Language …
Recently, many domestic financial companies have been building GenAI (Generative AI, LLM) systems. However, discussions on LLM security vulnerabilities have …
Prompt Injection, one of the key vulnerabilities in GenAI (LLM) systems, appears as the first item in the OWASP Top …
In preparation for our GenAI (LLM) Red Team service, we organized a comprehensive set of detailed evaluation test cases, aligned …
Name | Full Name | Architecture | Base Model | Developed | Training Dataset | Lib. & Framework | Use Cases | HF URL | GitHub URL
TimeSformer | TimeSformer (Time-Space Transformer) | Transformer | Vision Transformer (ViT) | 2021 | Evaluated on datasets like Kinetics-400 and Kinetics-600 | PyTorch | Video classification …
Name | Full Name | Architecture | Base Model | Developed | Training Dataset | Lib. & Framework | Use Cases | HF URL | GitHub URL
Audio Spectrogram Transformer | Audio Spectrogram Transformer | Transformer | ViT | 2021 | AudioSet | PyTorch, Hugging Face Transformers | Audio classification, sound event detection | https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer | https://github.com/YuanGongND/ast
Bark | Bark | GPT-like, …