Commit 857ccd13 authored by Abdoulaye Thiam's avatar Abdoulaye Thiam

versioning

*
!rembg
!setup.py
!setup.cfg
!requirements.txt
!requirements-cpu.txt
!requirements-gpu.txt
!versioneer.py
!README.md
# https://editorconfig.org/
root = true
[*]
indent_style = space
indent_size = 4
insert_final_newline = true
trim_trailing_whitespace = true
end_of_line = lf
charset = utf-8
rembg/_version.py export-subst
github: [danielgatis]
custom: ["https://www.buymeacoffee.com/danielgatis"]
---
name: Bug report
about: Create a report to help us improve
title: "[BUG] ..."
labels: bug
assignees: ""
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Images**
Input images to reproduce.
**OS Version:**
iOS 22
**Rembg version:**
v2.0.21
**Additional context**
Add any other context about the problem here.
---
name: Feature request
about: Suggest an idea for this project
title: "[FEATURE] ..."
labels: enhancement
assignees: ""
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
name: Close inactive issues

on:
  schedule:
    - cron: "30 1 * * *"

jobs:
  close-issues:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - uses: actions/stale@v5
        with:
          days-before-issue-stale: 30
          days-before-issue-close: 14
          stale-issue-label: "stale"
          stale-issue-message: "This issue is stale because it has been open for 30 days with no activity."
          close-issue-message: "This issue was closed because it has been inactive for 14 days since being marked as stale."
          days-before-pr-stale: -1
          days-before-pr-close: -1
          repo-token: ${{ secrets.GITHUB_TOKEN }}
name: Lint

on: [pull_request, push]

jobs:
  lint_python:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - run: pip install --upgrade pip wheel
      - run: pip install bandit black flake8 flake8-bugbear flake8-comprehensions isort safety mypy
      - run: mypy --install-types --non-interactive --ignore-missing-imports ./rembg
      - run: bandit --recursive --skip B101,B104,B310,B311,B303 --exclude ./rembg/_version.py ./rembg
      - run: black --force-exclude rembg/_version.py --check --diff ./rembg
      - run: flake8 ./rembg --count --ignore=B008,C901,E203,E266,E731,F401,F811,F841,W503 --max-line-length=120 --show-source --statistics --exclude ./rembg/_version.py
      - run: isort --check-only --profile black ./rembg
      - run: safety check
name: Publish Docker image

on:
  push:
    tags:
      - "v*.*.*"

jobs:
  push_to_registry:
    name: Push Docker image to Docker Hub
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ secrets.DOCKER_HUB_USERNAME }}/rembg:latest
name: Publish to PyPI

on:
  push:
    tags:
      - "v*.*.*"

jobs:
  push_to_pypi:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: "Install dependencies"
        run: |
          python3 -m pip install --upgrade pip
          python3 -m pip install setuptools wheel twine
      - name: "Build and upload to PyPI"
        run: |
          python3 setup.py sdist bdist_wheel
          python3 -m twine upload dist/*
        env:
          TWINE_USERNAME: ${{ secrets.PIPY_USERNAME }}
          TWINE_PASSWORD: ${{ secrets.PIPY_PASSWORD }}
name: Run tests

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.7", "3.8", "3.9", "3.10"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest
          pip install -r requirements.txt
      - name: Test with pytest
        run: |
          PYTHONPATH=$PYTHONPATH:. pytest .
# general things to ignore
build/
dist/
.venv/
.direnv/
*.spec
*.egg-info/
*.egg
*.py[cod]
__pycache__/
*.so
*~
.envrc
.python-version
.idea
.pytest_cache
# due to using tox and pytest
.tox
.cache
.mypy_cache
FROM nvidia/cuda:11.6.0-runtime-ubuntu18.04
ENV DEBIAN_FRONTEND noninteractive
RUN rm /etc/apt/sources.list.d/cuda.list || true
RUN rm /etc/apt/sources.list.d/nvidia-ml.list || true
RUN apt-key del 7fa2af80
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub
RUN apt update -y
RUN apt upgrade -y
RUN apt install -y curl software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt install -y python3.9 python3.9-distutils
RUN curl https://bootstrap.pypa.io/get-pip.py | python3.9
WORKDIR /rembg
COPY . .
RUN python3.9 -m pip install .[gpu]
RUN mkdir -p ~/.u2net
RUN gdown https://drive.google.com/uc?id=1tNuFmLv0TSNDjYIkjEdeH1IWKQdUA4HR -O ~/.u2net/u2netp.onnx
RUN gdown https://drive.google.com/uc?id=1tCU5MM1LhRgGou5OpmpjBQbSrYIUoYab -O ~/.u2net/u2net.onnx
RUN gdown https://drive.google.com/uc?id=1ZfqwVxu-1XWC1xU1GHIP-FM_Knd_AX5j -O ~/.u2net/u2net_human_seg.onnx
RUN gdown https://drive.google.com/uc?id=15rKbQSXQzrKCQurUjZFg8HqzZad8bcyz -O ~/.u2net/u2net_cloth_seg.onnx
EXPOSE 5000
ENTRYPOINT ["rembg"]
CMD ["--help"]
MIT License
Copyright (c) 2020 Daniel Gatis
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
include MANIFEST.in
include LICENSE.txt
include README.md
include setup.py
include pyproject.toml
include requirements.txt
include requirements-gpu.txt
include versioneer.py
include rembg/_version.py
# Rembg
[![Downloads](https://pepy.tech/badge/rembg)](https://pepy.tech/project/rembg)
[![Downloads](https://pepy.tech/badge/rembg/month)](https://pepy.tech/project/rembg/month)
[![Downloads](https://pepy.tech/badge/rembg/week)](https://pepy.tech/project/rembg/week)
[![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://img.shields.io/badge/License-MIT-blue.svg)
[![Hugging Face Spaces](https://img.shields.io/badge/🤗%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/KenjieDec/RemBG)
Rembg is a tool to remove the background from images. That is it.
<p style="display: flex;align-items: center;justify-content: center;">
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/car-1.jpg" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/car-1.out.png" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/car-2.jpg" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/car-2.out.png" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/car-3.jpg" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/car-3.out.png" width="100" />
</p>
<p style="display: flex;align-items: center;justify-content: center;">
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/animal-1.jpg" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/animal-1.out.png" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/animal-2.jpg" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/animal-2.out.png" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/animal-3.jpg" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/animal-3.out.png" width="100" />
</p>
<p style="display: flex;align-items: center;justify-content: center;">
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/girl-1.jpg" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/girl-1.out.png" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/girl-2.jpg" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/girl-2.out.png" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/girl-3.jpg" width="100" />
<img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/girl-3.out.png" width="100" />
</p>
**If this project has helped you, please consider making a [donation](https://www.buymeacoffee.com/danielgatis).**
### Installation
CPU support:
```bash
pip install rembg
```
GPU support:
```bash
pip install rembg[gpu]
```
### Usage as a CLI
Remove the background from a remote image
```bash
curl -s http://input.png | rembg i > output.png
```
Remove the background from a local file
```bash
rembg i path/to/input.png path/to/output.png
```
Remove the background from all images in a folder
```bash
rembg p path/to/input path/to/output
```
### Usage as a server
Start the server
```bash
rembg s
```
And go to:
```
http://localhost:5000/docs
```
Image with background:
```
https://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Gull_portrait_ca_usa.jpg/1280px-Gull_portrait_ca_usa.jpg
```
Image without background:
```
http://localhost:5000/?url=https://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Gull_portrait_ca_usa.jpg/1280px-Gull_portrait_ca_usa.jpg
```
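The same endpoint can be called from a terminal. A minimal sketch, assuming the server is already running locally on port 5000:

```bash
# Fetch the background-removed result through the `url` query parameter
# and save it to output.png (file name is illustrative).
curl -s "http://localhost:5000/?url=https://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Gull_portrait_ca_usa.jpg/1280px-Gull_portrait_ca_usa.jpg" -o output.png
```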
You can also send the file as form data (multipart/form-data):
```html
<form
action="http://localhost:5000"
method="post"
enctype="multipart/form-data"
>
<input type="file" name="file" />
<input type="submit" value="upload" />
</form>
```
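The same upload can be done without a browser. A sketch using curl, assuming the server listens on localhost:5000 and a local file named `input.png` (illustrative name):

```bash
# POST the file under the "file" field, matching the HTML form above,
# and save the returned cutout to output.png.
curl -s -F "file=@input.png" http://localhost:5000 -o output.png
```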
### Usage as a library
Input and output as bytes
```python
from rembg import remove
input_path = 'input.png'
output_path = 'output.png'
with open(input_path, 'rb') as i:
    with open(output_path, 'wb') as o:
        input = i.read()
        output = remove(input)
        o.write(output)
```
Input and output as a PIL image
```python
from rembg import remove
from PIL import Image
input_path = 'input.png'
output_path = 'output.png'
input = Image.open(input_path)
output = remove(input)
output.save(output_path)
```
Input and output as a numpy array
```python
from rembg import remove
import cv2
input_path = 'input.png'
output_path = 'output.png'
input = cv2.imread(input_path)
output = remove(input)
cv2.imwrite(output_path, output)
```
### Usage as a Docker container
Try this:
```bash
docker run -p 5000:5000 danielgatis/rembg s
```
Image with background:
```
https://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Gull_portrait_ca_usa.jpg/1280px-Gull_portrait_ca_usa.jpg
```
Image without background:
```
http://localhost:5000/?url=https://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Gull_portrait_ca_usa.jpg/1280px-Gull_portrait_ca_usa.jpg
```
### Models
All models are downloaded and saved to the `.u2net` directory in the user's home folder.
The available models are:
- u2net ([download](https://drive.google.com/uc?id=1tCU5MM1LhRgGou5OpmpjBQbSrYIUoYab) - [alternative](http://depositfiles.com/files/ltxbqa06w), [source](https://github.com/xuebinqin/U-2-Net)): A pre-trained model for general use cases.
- u2netp ([download](https://drive.google.com/uc?id=1tNuFmLv0TSNDjYIkjEdeH1IWKQdUA4HR) - [alternative](http://depositfiles.com/files/0y9i0r2fy), [source](https://github.com/xuebinqin/U-2-Net)): A lightweight version of the u2net model.
- u2net_human_seg ([download](https://drive.google.com/uc?id=1ZfqwVxu-1XWC1xU1GHIP-FM_Knd_AX5j) - [alternative](http://depositfiles.com/files/6spp8qpey), [source](https://github.com/xuebinqin/U-2-Net)): A pre-trained model for human segmentation.
- u2net_cloth_seg ([download](https://drive.google.com/uc?id=15rKbQSXQzrKCQurUjZFg8HqzZad8bcyz) - [alternative](http://depositfiles.com/files/l3z3cxetq), [source](https://github.com/levindabhi/cloth-segmentation)): A pre-trained model for cloth parsing from human portraits. Clothes are parsed into three categories: upper body, lower body and full body.
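To pick one of these models from Python, a session can be created once and passed to `remove` via its `session` parameter. A minimal sketch; in this code base `new_session` lives in `rembg.session_factory`, and the file names here are illustrative:

```python
from rembg import remove
from rembg.session_factory import new_session

# Create the session once; the model file is downloaded on first use.
session = new_session("u2net_human_seg")

with open("input.png", "rb") as i, open("output.png", "wb") as o:
    # Reusing the session avoids reloading the ONNX model for every image.
    o.write(remove(i.read(), session=session))
```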
#### How to train your own model
If you need a more finely tuned model, see:
https://github.com/danielgatis/rembg/issues/193#issuecomment-1055534289
### Advanced usage
Sometimes it is possible to achieve better results by turning on alpha matting. Example:
```bash
curl -s http://input.png | rembg i -a -ae 15 > output.png
```
<table>
<thead>
<tr>
<td>Original</td>
<td>Without alpha matting</td>
<td>With alpha matting (-a -ae 15)</td>
</tr>
</thead>
<tbody>
<tr>
<td><img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/food-1.jpg"/></td>
<td><img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/food-1.out.jpg"/></td>
<td><img src="https://raw.githubusercontent.com/danielgatis/rembg/master/examples/food-1.out.alpha.jpg"/></td>
</tr>
</tbody>
</table>
### In the cloud
Please contact me at danielgatis@gmail.com if you need help putting it in the cloud.
### References
- https://arxiv.org/pdf/2005.09007.pdf
- https://github.com/NathanUA/U-2-Net
- https://github.com/pymatting/pymatting
### Buy me a coffee
Liked some of my work? Buy me a coffee (or more likely a beer)
<a href="https://www.buymeacoffee.com/danielgatis" target="_blank"><img src="https://bmc-cdn.nyc3.digitaloceanspaces.com/BMC-button-images/custom_images/orange_img.png" alt="Buy Me A Coffee" style="height: auto !important;width: auto !important;"></a>
### License
Copyright (c) 2020-present [Daniel Gatis](https://github.com/danielgatis)
Licensed under [MIT License](./LICENSE.txt)
[build-system]
# These are the assumed default build requirements from pip:
# https://pip.pypa.io/en/stable/reference/pip/#pep-517-and-518-support
requires = ["setuptools>=40.8.0", "wheel"]
build-backend = "setuptools.build_meta"
[versioneer]
VCS = "git"
style = "pep440"
versionfile_source = "rembg/_version.py"
versionfile_build = "rembg/_version.py"
tag_prefix = "v"
parentdir_prefix = "rembg-"
[pytest]
filterwarnings =
    ignore::DeprecationWarning
from rembg.cli import main

if __name__ == "__main__":
    main()
from . import _version
__version__ = _version.get_versions()["version"]
from .bg import remove
import io
from enum import Enum
from typing import List, Optional, Union

import numpy as np
from cv2 import (
    BORDER_DEFAULT,
    MORPH_ELLIPSE,
    MORPH_OPEN,
    GaussianBlur,
    getStructuringElement,
    morphologyEx,
)
from PIL import Image
from PIL.Image import Image as PILImage
from pymatting.alpha.estimate_alpha_cf import estimate_alpha_cf
from pymatting.foreground.estimate_foreground_ml import estimate_foreground_ml
from pymatting.util.util import stack_images
from scipy.ndimage import binary_erosion

from .session_base import BaseSession
from .session_factory import new_session

kernel = getStructuringElement(MORPH_ELLIPSE, (3, 3))


class ReturnType(Enum):
    BYTES = 0
    PILLOW = 1
    NDARRAY = 2


def alpha_matting_cutout(
    img: PILImage,
    mask: PILImage,
    foreground_threshold: int,
    background_threshold: int,
    erode_structure_size: int,
) -> PILImage:
    if img.mode == "RGBA" or img.mode == "CMYK":
        img = img.convert("RGB")

    img = np.asarray(img)
    mask = np.asarray(mask)

    is_foreground = mask > foreground_threshold
    is_background = mask < background_threshold

    structure = None
    if erode_structure_size > 0:
        structure = np.ones(
            (erode_structure_size, erode_structure_size), dtype=np.uint8
        )

    is_foreground = binary_erosion(is_foreground, structure=structure)
    is_background = binary_erosion(is_background, structure=structure, border_value=1)

    trimap = np.full(mask.shape, dtype=np.uint8, fill_value=128)
    trimap[is_foreground] = 255
    trimap[is_background] = 0

    img_normalized = img / 255.0
    trimap_normalized = trimap / 255.0

    alpha = estimate_alpha_cf(img_normalized, trimap_normalized)
    foreground = estimate_foreground_ml(img_normalized, alpha)
    cutout = stack_images(foreground, alpha)

    cutout = np.clip(cutout * 255, 0, 255).astype(np.uint8)
    cutout = Image.fromarray(cutout)

    return cutout


def naive_cutout(img: PILImage, mask: PILImage) -> PILImage:
    empty = Image.new("RGBA", (img.size), 0)
    cutout = Image.composite(img, empty, mask)
    return cutout


def get_concat_v_multi(imgs: List[PILImage]) -> PILImage:
    pivot = imgs.pop(0)
    for im in imgs:
        pivot = get_concat_v(pivot, im)
    return pivot


def get_concat_v(img1: PILImage, img2: PILImage) -> PILImage:
    dst = Image.new("RGBA", (img1.width, img1.height + img2.height))
    dst.paste(img1, (0, 0))
    dst.paste(img2, (0, img1.height))
    return dst


def post_process(mask: np.ndarray) -> np.ndarray:
    """
    Post Process the mask for a smooth boundary by applying Morphological Operations
    Research based on paper: https://www.sciencedirect.com/science/article/pii/S2352914821000757
    args:
        mask: Binary Numpy Mask
    """
    mask = morphologyEx(mask, MORPH_OPEN, kernel)
    mask = GaussianBlur(mask, (5, 5), sigmaX=2, sigmaY=2, borderType=BORDER_DEFAULT)
    mask = np.where(mask < 127, 0, 255).astype(np.uint8)  # convert again to binary
    return mask


def remove(
    data: Union[bytes, PILImage, np.ndarray],
    alpha_matting: bool = False,
    alpha_matting_foreground_threshold: int = 240,
    alpha_matting_background_threshold: int = 10,
    alpha_matting_erode_size: int = 10,
    session: Optional[BaseSession] = None,
    only_mask: bool = False,
    post_process_mask: bool = False,
) -> Union[bytes, PILImage, np.ndarray]:
    if isinstance(data, PILImage):
        return_type = ReturnType.PILLOW
        img = data
    elif isinstance(data, bytes):
        return_type = ReturnType.BYTES
        img = Image.open(io.BytesIO(data))
    elif isinstance(data, np.ndarray):
        return_type = ReturnType.NDARRAY
        img = Image.fromarray(data)
    else:
        raise ValueError("Input type {} is not supported.".format(type(data)))

    if session is None:
        session = new_session("u2net")

    masks = session.predict(img)
    cutouts = []

    for mask in masks:
        if post_process_mask:
            mask = Image.fromarray(post_process(np.array(mask)))

        if only_mask:
            cutout = mask
        elif alpha_matting:
            try:
                cutout = alpha_matting_cutout(
                    img,
                    mask,
                    alpha_matting_foreground_threshold,
                    alpha_matting_background_threshold,
                    alpha_matting_erode_size,
                )
            except ValueError:
                cutout = naive_cutout(img, mask)
        else:
            cutout = naive_cutout(img, mask)

        cutouts.append(cutout)

    cutout = img
    if len(cutouts) > 0:
        cutout = get_concat_v_multi(cutouts)

    if ReturnType.PILLOW == return_type:
        return cutout

    if ReturnType.NDARRAY == return_type:
        return np.asarray(cutout)

    bio = io.BytesIO()
    cutout.save(bio, "PNG")
    bio.seek(0)

    return bio.read()
from typing import Dict, List, Tuple

import numpy as np
import onnxruntime as ort
from PIL import Image
from PIL.Image import Image as PILImage


class BaseSession:
    def __init__(self, model_name: str, inner_session: ort.InferenceSession):
        self.model_name = model_name
        self.inner_session = inner_session

    def normalize(
        self,
        img: PILImage,
        mean: Tuple[float, float, float],
        std: Tuple[float, float, float],
        size: Tuple[int, int],
    ) -> Dict[str, np.ndarray]:
        im = img.convert("RGB").resize(size, Image.Resampling.LANCZOS)

        im_ary = np.array(im)
        im_ary = im_ary / np.max(im_ary)

        tmpImg = np.zeros((im_ary.shape[0], im_ary.shape[1], 3))
        tmpImg[:, :, 0] = (im_ary[:, :, 0] - mean[0]) / std[0]
        tmpImg[:, :, 1] = (im_ary[:, :, 1] - mean[1]) / std[1]
        tmpImg[:, :, 2] = (im_ary[:, :, 2] - mean[2]) / std[2]

        tmpImg = tmpImg.transpose((2, 0, 1))

        return {
            self.inner_session.get_inputs()[0]
            .name: np.expand_dims(tmpImg, 0)
            .astype(np.float32)
        }

    def predict(self, img: PILImage) -> List[PILImage]:
        raise NotImplementedError
from typing import List

import numpy as np
from PIL import Image
from PIL.Image import Image as PILImage
from scipy.special import log_softmax

from .session_base import BaseSession

pallete1 = [0, 0, 0, 255, 255, 255, 0, 0, 0, 0, 0, 0]
pallete2 = [0, 0, 0, 0, 0, 0, 255, 255, 255, 0, 0, 0]
pallete3 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255]


class ClothSession(BaseSession):
    def predict(self, img: PILImage) -> List[PILImage]:
        ort_outs = self.inner_session.run(
            None, self.normalize(img, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), (768, 768))
        )

        pred = ort_outs
        pred = log_softmax(pred[0], 1)
        pred = np.argmax(pred, axis=1, keepdims=True)
        pred = np.squeeze(pred, 0)
        pred = np.squeeze(pred, 0)

        mask = Image.fromarray(pred.astype("uint8"), mode="L")
        mask = mask.resize(img.size, Image.LANCZOS)

        masks = []

        mask1 = mask.copy()
        mask1.putpalette(pallete1)
        mask1 = mask1.convert("RGB").convert("L")
        masks.append(mask1)

        mask2 = mask.copy()
        mask2.putpalette(pallete2)
        mask2 = mask2.convert("RGB").convert("L")
        masks.append(mask2)

        mask3 = mask.copy()
        mask3.putpalette(pallete3)
        mask3 = mask3.convert("RGB").convert("L")
        masks.append(mask3)

        return masks
import hashlib
import os
import sys
from contextlib import redirect_stdout
from pathlib import Path
from typing import Type

import gdown
import onnxruntime as ort

from .session_base import BaseSession
from .session_cloth import ClothSession
from .session_simple import SimpleSession


def new_session(model_name: str) -> BaseSession:
    session_class: Type[BaseSession]

    if model_name == "u2netp":
        md5 = "8e83ca70e441ab06c318d82300c84806"
        url = "https://drive.google.com/uc?id=1tNuFmLv0TSNDjYIkjEdeH1IWKQdUA4HR"
        session_class = SimpleSession
    elif model_name == "u2net":
        md5 = "60024c5c889badc19c04ad937298a77b"
        url = "https://drive.google.com/uc?id=1tCU5MM1LhRgGou5OpmpjBQbSrYIUoYab"
        session_class = SimpleSession
    elif model_name == "u2net_human_seg":
        md5 = "c09ddc2e0104f800e3e1bb4652583d1f"
        url = "https://drive.google.com/uc?id=1ZfqwVxu-1XWC1xU1GHIP-FM_Knd_AX5j"
        session_class = SimpleSession
    elif model_name == "u2net_cloth_seg":
        md5 = "2434d1f3cb744e0e49386c906e5a08bb"
        url = "https://drive.google.com/uc?id=15rKbQSXQzrKCQurUjZFg8HqzZad8bcyz"
        session_class = ClothSession
    else:
        raise ValueError(
            "Choose between u2net, u2netp, u2net_human_seg or u2net_cloth_seg"
        )

    # Models are stored in a repository-local directory instead of U2NET_HOME.
    path = Path("rembg/model-mrz") / f"{model_name}.onnx"
    path.parent.mkdir(parents=True, exist_ok=True)

    if not path.exists():
        with redirect_stdout(sys.stderr):
            gdown.download(url, str(path), use_cookies=False)
    else:
        hashing = hashlib.new("md5", path.read_bytes(), usedforsecurity=False)
        if hashing.hexdigest() != md5:
            with redirect_stdout(sys.stderr):
                gdown.download(url, str(path), use_cookies=False)

    sess_opts = ort.SessionOptions()

    if "OMP_NUM_THREADS" in os.environ:
        sess_opts.inter_op_num_threads = int(os.environ["OMP_NUM_THREADS"])

    return session_class(
        model_name,
        ort.InferenceSession(
            str(path), providers=ort.get_available_providers(), sess_options=sess_opts
        ),
    )
from typing import List

import numpy as np
from PIL import Image
from PIL.Image import Image as PILImage

from .session_base import BaseSession


class SimpleSession(BaseSession):
    def predict(self, img: PILImage) -> List[PILImage]:
        ort_outs = self.inner_session.run(
            None,
            self.normalize(
                img, (0.485, 0.456, 0.406), (0.229, 0.224, 0.225), (320, 320)
            ),
        )

        pred = ort_outs[0][:, 0, :, :]

        ma = np.max(pred)
        mi = np.min(pred)

        pred = (pred - mi) / (ma - mi)
        pred = np.squeeze(pred)

        mask = Image.fromarray((pred * 255).astype("uint8"), mode="L")
        mask = mask.resize(img.size, Image.Resampling.LANCZOS)

        return [mask]
onnxruntime-gpu==1.12.1
aiohttp==3.8.1
asyncer==0.0.1
click==8.1.3
fastapi==0.80.0
filetype==1.1.0
gdown==4.5.1
imagehash==4.2.1
numpy==1.21.6
onnxruntime==1.12.1
opencv-python-headless==4.6.0.66
pillow==9.2.0
pymatting==1.1.8
python-multipart==0.0.5
scikit-image==0.19.3
scipy==1.7.3
tqdm==4.64.0
uvicorn==0.18.3
watchdog==2.1.9
[metadata]
# This includes the license file(s) in the wheel.
# https://wheel.readthedocs.io/en/stable/user_guide.html#including-license-files-in-the-generated-wheel-file
license_files = LICENSE.txt
# See the docstring in versioneer.py for instructions. Note that you must
# re-run 'versioneer.py setup' after changing this section, and commit the
# resulting files.
[versioneer]
VCS = git
style = pep440
versionfile_source = rembg/_version.py
versionfile_build = rembg/_version.py
tag_prefix = v
parentdir_prefix = rembg-
import os
import pathlib
import sys

sys.path.append(os.path.dirname(__file__))

from setuptools import find_packages, setup

import versioneer

here = pathlib.Path(__file__).parent.resolve()

long_description = (here / "README.md").read_text(encoding="utf-8")

with open(here / "requirements.txt") as f:
    requireds = f.read().splitlines()

with open(here / "requirements-gpu.txt") as f:
    gpu_requireds = f.read().splitlines()

setup(
    name="rembg",
    description="Remove image background",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/danielgatis/rembg",
    author="Daniel Gatis",
    author_email="danielgatis@gmail.com",
    classifiers=[
        "License :: OSI Approved :: MIT License",
    ],
    keywords="remove, background, u2net",
    packages=["rembg"],
    python_requires=">=3.7",
    install_requires=requireds,
    entry_points={
        "console_scripts": [
            "rembg=rembg.cli:main",
        ],
    },
    extras_require={
        "gpu": gpu_requireds,
    },
    version=versioneer.get_version(),
    cmdclass=versioneer.get_cmdclass(),
)
from io import BytesIO
from pathlib import Path
from imagehash import average_hash
from PIL import Image
from rembg import remove
here = Path(__file__).parent.resolve()
def test_remove():
    image = Path(here / ".." / "examples" / "animal-1.jpg").read_bytes()
    expected = Path(here / ".." / "examples" / "animal-1.out.png").read_bytes()
    actual = remove(image)

    actual_hash = average_hash(Image.open(BytesIO(actual)))
    expected_hash = average_hash(Image.open(BytesIO(expected)))

    assert actual_hash == expected_hash