nhentai Downloader

A self-hosted solution to build and browse your own doujinshi library.

This repository contains three independent pieces:

  • scraper/ – an asynchronous Python 3 scraper that downloads doujinshi from nhentai.net into a local manga/ folder.
  • backend/ – a small Node.js API that serves the downloaded files and their metadata.
  • frontend/ – a single-page web interface built with Vite and Vue.

The scraper is completely standalone. Use it to gather the content you want first, then run the backend and frontend to host the library.

Frontend Preview

[Screenshot: censored demo of the web UI]

Features

  • Downloads doujinshi from nhentai as individual folders with JSON metadata.
  • Browse the collection in a responsive web UI.
  • Download any entry as a PDF or zipped archive.
  • Docker support for easy deployment.

Getting Started

1. Scrape your manga

Ensure Python 3.11+ is installed. Inside scraper/, adjust main.py to choose what to download, then run:

cd scraper
pip install -r requirements.txt
python3 main.py

Downloaded files appear under manga/ (created automatically at the repository root).
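
The result is one folder per gallery, with JSON metadata stored alongside the page images. The names below are only illustrative; the scraper's actual naming may differ:

ls manga/123456/
# metadata.json  1.jpg  2.jpg  ...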

2. Install web dependencies

npm run install:all
cp backend/.env.example backend/.env
cp frontend/.env.example frontend/.env

Edit the .env files to set your API key and optional password.
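
As a sketch, a minimal backend/.env could look like the lines below. The variable names here are assumptions for illustration; take the real ones from .env.example:

API_KEY=replace-with-a-long-random-string   # sent by clients in the Authorization header
PASSWORD=change-me                          # optional; leave unset to disable the login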

3. Start in development

npm run dev

By default the frontend runs on http://localhost:5173 and the API on http://localhost:8787. Update the ports inside the .env files if needed. Every API request must include the key defined in backend/.env in the Authorization header.
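
As a quick smoke test, you can call the API directly. This sketch assumes the API listens on port 8787 and accepts the raw key; if the backend expects a scheme such as Bearer, prefix the key accordingly:

curl -H "Authorization: $API_KEY" http://localhost:8787/api/manga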

4. Build for production

npm run build

Serve the contents of frontend/dist on any static host and run the backend (using npm run prod inside backend/ or the Docker setup below).
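
One possible production setup, assuming nothing about your hosting beyond a generic static file server (the serve package is only an example):

npx serve frontend/dist        # any static host works here
(cd backend && npm run prod)   # run the API alongside it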

Docker Deployment

Both components have ready‑to‑use docker-compose.yml files. From the repository root run:

# API
cd backend && docker compose up -d
# Frontend
cd ../frontend && docker compose up -d

The containers read the same .env files and mount ../manga to make your collection available.
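
Standard Compose commands help verify the deployment; run them from backend/ or frontend/ respectively:

docker compose ps        # both services should show as running
docker compose logs -f   # tail the logs if something fails to start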

API Summary

  • GET /api/manga – list all entries
  • POST /api/rescan – rebuild the cache after adding files
  • GET /api/stats – number of pages and library size
  • GET /api/manga/:id/archive – download as ZIP
  • GET /api/manga/:id/pdf – download as PDF

Static images are served from /manga.
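
A few illustrative calls; the key, port, and entry ID are placeholders, so adjust them to your setup:

# rebuild the cache after the scraper added new folders
curl -X POST -H "Authorization: $API_KEY" http://localhost:8787/api/rescan
# check the size of the library
curl -H "Authorization: $API_KEY" http://localhost:8787/api/stats
# download entry 123456 as a ZIP
curl -OJ -H "Authorization: $API_KEY" http://localhost:8787/api/manga/123456/archive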

Compatibility

The scraper targets Python 3.13 but also works on Python 3.11 and 3.12. Earlier versions are not supported. The Node backend requires Node.js 20 or later.
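
A quick way to confirm the toolchain before starting:

python3 --version   # should report 3.11 or newer
node --version      # should report 20 or newer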

AI Reliance

The front-end and back-end components were generated entirely by AI, with no human-written code; only the scraper was hand-crafted.

Motivation

Once the scraper component was complete, I only intended to throw together a quick front-end. Thanks to OpenAI Codex, however, the project soon evolved into a full-fledged, API-driven website with both a front-end and a back-end.

License

Released under the terms of the GNU General Public License v3.