  • The Ohio State University
  • Columbus
  • 13:27 (UTC -12:00)

13 starred repositories written in Python

An open source implementation of CLIP.

Python · 9,716 stars · 950 forks · Updated Aug 19, 2024

An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management.

Python · 4,576 stars · 144 forks · Updated Aug 7, 2024

[ICML'24] SeeAct is a system for generalist web agents that autonomously carry out tasks on any given website, with a focus on large multimodal models (LMMs) such as GPT-4V(ision).

Python · 582 stars · 69 forks · Updated Aug 26, 2024

The official implementation of our ICLR2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models".

Python · 200 stars · 34 forks · Updated Aug 16, 2024

An Open Robustness Benchmark for Jailbreaking Language Models [arXiv 2024]

Python · 164 stars · 16 forks · Updated Aug 15, 2024
Python · 49 stars · 8 forks · Updated Jan 9, 2024

[ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting."

Python · 33 stars · Updated Jul 11, 2024

JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and further assess the robustness and safety of MLLMs against a variety of jail…

Python · 28 stars · 3 forks · Updated Jul 12, 2024

A lightweight library for large language model (LLM) jailbreaking defense.

Python · 26 stars · 3 forks · Updated Aug 16, 2024

The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models".

Python · 26 stars · 3 forks · Updated Apr 7, 2024
Python · 5 stars · Updated Jan 20, 2024

Data and code for the paper "Knowledge-to-Jailbreak: One Knowledge Point Worth One Attack."

Python · 4 stars · Updated Jun 28, 2024
Python · 2 stars · Updated Jun 12, 2024