Library name: transformers.js
Base model: nielsr/vitpose-base-simple
Task: Keypoint detection
https://huggingface.co/nielsr/vitpose-base-simple with ONNX weights to be compatible with Transformers.js.
Usage (Transformers.js)
If you haven't already, you can install the Transformers.js JavaScript library from NPM using:
npm i @huggingface/transformers
Example: Perform pose estimation with onnx-community/vitpose-base-simple.
import { AutoModel, AutoImageProcessor, RawImage } from '@huggingface/transformers';

// Load model and processor
const model_id = 'onnx-community/vitpose-base-simple';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoImageProcessor.from_pretrained(model_id);

// Load image and prepare inputs
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/ryan-gosling.jpg';
const image = await RawImage.read(url);
const inputs = await processor(image);

// Predict heatmaps
const { heatmaps } = await model(inputs);

// Post-process heatmaps to get keypoints and scores
const boxes = [[[0, 0, image.width, image.height]]];
const results = processor.post_process_pose_estimation(heatmaps, boxes)[0][0];
console.log(results);
Optionally, visualize the output (shown here in a Node.js environment, using the canvas library):
import { createCanvas, createImageData } from 'canvas';

// Create canvas and draw the input image
const canvas = createCanvas(image.width, image.height);
const ctx = canvas.getContext('2d');
const imageData = createImageData(image.rgba().data, image.width, image.height);
ctx.putImageData(imageData, 0, 0);
// Draw edges between keypoints
const points = results.keypoints;
ctx.lineWidth = 4;
ctx.strokeStyle = 'blue';
for (const [i, j] of model.config.edges) {
  const [x1, y1] = points[i];
  const [x2, y2] = points[j];
  ctx.beginPath();
  ctx.moveTo(x1, y1);
  ctx.lineTo(x2, y2);
  ctx.stroke();
}
// Draw a circle at each keypoint
ctx.fillStyle = 'red';
for (const [x, y] of points) {
  ctx.beginPath();
  ctx.arc(x, y, 8, 0, 2 * Math.PI);
  ctx.fill();
}
// Save the result to a PNG file
import fs from 'fs';
const out = fs.createWriteStream('pose.png');
const stream = canvas.createPNGStream();
stream.pipe(out);
out.on('finish', () => console.log('The PNG file was created.'));
| Input image | Output image |
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your model web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with the ONNX weights located in a subfolder named onnx).
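As a sketch of that conversion step, the 🤗 Optimum CLI can export a Hugging Face checkpoint to ONNX. This is an illustration, not part of this repo's instructions: the output directory name is a placeholder, and whether a given architecture (such as VitPose) is supported by the exporter depends on your Optimum version.

```shell
# Install 🤗 Optimum with the ONNX exporter extras (assumed setup)
pip install "optimum[exporters]"

# Export the checkpoint to ONNX; ./onnx is a placeholder output directory,
# matching the subfolder layout this repo recommends
optimum-cli export onnx --model nielsr/vitpose-base-simple ./onnx
```

After export, the ONNX files in the onnx/ subfolder can be loaded by Transformers.js in the same way as shown above.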