Yahoo Web Search

Search results

  1. We’re on a journey to advance and democratize artificial intelligence through open source and open science.

    • Datasets

      Datasets - Hugging Face – The AI community building the...

    • Documentation

      Train and Deploy Transformer models with Amazon SageMaker...

    • Models

      Models - Hugging Face – The AI community building the...

    • Spaces

      Discover amazing ML apps made by the community

    • Forum

      Community Discussion, powered by Hugging Face <3. Hugging...

    • Learn

      Learn - Hugging Face – The AI community building the future.

    • Lllyasviel Sd_Control_Collection

      Collection of community SD control models for users to...

  2. We’re on a journey to advance and democratize artificial intelligence through open source and open science.

  3. Hugging Face – The AI community building the future. Create a new model from the website. Hub documentation: take a first look at the Hub features. Programmatic access: use the Hub’s Python client library. Getting started with the git and git-lfs interface: you can create a repository from the CLI (skip this if you created a repo from the website).

  4. www.hugging-face.org › models – HuggingFace Models

    HuggingFace Models is a prominent platform in the machine learning community, providing an extensive library of pre-trained models for various natural language processing (NLP) tasks.

    • Overview
    • Online demos
    • 100 projects using Transformers
    • Quick tour
    • Why should I use transformers?
    • Why shouldn't I use transformers?
    • Installation
    • Model architectures
    • Citation

    • 📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, and text generation, in over 100 languages.

    • 🖼️ Images, for tasks like image classification, object detection, and segmentation.

    • 🗣️ Audio, for tasks like speech recognition and audio classification.

    Transformer models can also perform tasks on several modalities combined, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.

    🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our model hub. At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
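As a sketch of that download-and-use flow (the checkpoint name is illustrative; any Hub checkpoint with a matching architecture works with the Auto classes):

```python
from transformers import AutoModel, AutoTokenizer

# download (and cache) a pretrained checkpoint and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# tokenize a sentence and run it through the model
inputs = tokenizer("Hello, Transformers!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```

The same checkpoint can then be fine-tuned on your own dataset and pushed back to the Hub.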

    🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other.

    Transformers is more than a toolkit to use pretrained models: it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects.

    In order to celebrate the 100,000 stars of transformers, we have decided to put the spotlight on the community, and we have created the awesome-transformers page which lists 100 incredible projects built in the vicinity of transformers.

    To immediately use a model on a given input (text, image, audio, ...), we provide the pipeline API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts:
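The snippet the text describes is along these lines (a sketch; the default checkpoint and the exact score depend on your library version):

```python
from transformers import pipeline

# the pipeline call downloads and caches a default sentiment-analysis checkpoint
classifier = pipeline("sentiment-analysis")

# evaluating the pipeline on a string returns a label and a confidence score
result = classifier("We are very happy to introduce pipeline to the transformers repository.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.9997}]
```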

    The pipeline() call downloads and caches the pretrained model; calling the resulting classifier on a string then evaluates it on that text. Here, the answer is "positive" with a confidence of 99.97%.

    Many tasks have a pre-trained pipeline ready to go, in NLP but also in computer vision and speech. For example, we can easily extract detected objects in an image:
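A sketch of the object-detection case (the image URL is illustrative; the default checkpoint is chosen by the library and may need extra dependencies such as timm):

```python
import requests
from PIL import Image
from transformers import pipeline

# fetch an example image (URL is illustrative)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# the pipeline downloads a default object-detection checkpoint on first use
object_detector = pipeline("object-detection")
detections = object_detector(image)

# each detection holds a label, a confidence score, and a bounding box
for d in detections:
    print(d["label"], round(d["score"], 2), d["box"])
```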

    Here, we get a list of objects detected in the image, each with a bounding box and a confidence score. (The README displays the original image on the left with the predictions on the right.)

    1. Easy-to-use state-of-the-art models:

    • High performance on natural language understanding & generation, computer vision, and audio tasks.

    • Low barrier to entry for educators and practitioners.

    • Few user-facing abstractions with just three classes to learn.

    • A unified API for using all our pretrained models.

    2. Lower compute costs, smaller carbon footprint: researchers can share trained models instead of always training from scratch, and practitioners can reduce compute time and production costs.

    Why shouldn't I use transformers?

    • This library is not a modular toolbox of building blocks for neural nets. The code in the model files is deliberately not refactored with additional abstractions, so that researchers can quickly iterate on each of the models without diving into extra abstractions/files.

    • The training API is not intended to work on any model; it is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library (such as Accelerate).

    With pip

    This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, and TensorFlow 2.6+. You should install 🤗 Transformers in a virtual environment; if you're unfamiliar with Python virtual environments, check out the user guide. First, create a virtual environment with the version of Python you're going to use and activate it. Then install at least one of Flax, PyTorch, or TensorFlow; refer to the TensorFlow, PyTorch, or Flax and Jax installation pages for the specific command for your platform. Once one of those backends is installed, 🤗 Transformers can be installed with pip. If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, install the library from source.

    With conda

    🤗 Transformers can also be installed with conda. Follow the installation pages of Flax, PyTorch, or TensorFlow to see how to install those backends with conda.
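The conda route is a one-liner (the channel shown here is conda-forge, which is where the package is currently published; older instructions used a huggingface channel):

```shell
# install Transformers from the conda-forge channel
conda install conda-forge::transformers
```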

    All the model checkpoints provided by 🤗 Transformers are seamlessly integrated from the huggingface.co model hub, where they are uploaded directly by users and organizations.


    🤗 Transformers currently provides the following architectures (see here for a high-level summary of each of them):

    1. ALBERT (from Google Research and the Toyota Technological Institute at Chicago) released with the paper ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.

    2. ALIGN (from Google Research) released with the paper Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.

    3. AltCLIP (from BAAI) released with the paper AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.

    We now have a paper you can cite for the 🤗 Transformers library:
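The paper in question is the Transformers system-demonstration paper; an abbreviated BibTeX entry follows (see the repository for the full author list):

```bibtex
@inproceedings{wolf-etal-2020-transformers,
    title     = "Transformers: State-of-the-Art Natural Language Processing",
    author    = "Wolf, Thomas and others",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    year      = "2020",
    publisher = "Association for Computational Linguistics",
    pages     = "38--45",
}
```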

  5. www.hugging-face.org – Hugging Face

    Hugging Face is an innovative technology company and community at the forefront of artificial intelligence development.

  6. The AI community building the future. Hugging Face has 229 repositories available. Follow their code on GitHub.