
On Google Colab, we use the unsloth and trl libraries to fine-tune the llama3-8b model on a training dataset loaded from Google Drive, then run inference to verify the effect of the fine-tuning.

The fine-tuned model is saved to Google Drive, from where it can be downloaded to a local machine for other uses.

Preparation: upload the training dataset to the root directory of your Google Drive, with the file name train.json.

The data in train.json has the following format:

[
  {
    "instruction": "你好",
    "output": "你好,我是智能助手胖胖"
  },
  {
    "instruction": "hello",
    "output": "Hello! I am 智能助手胖胖, an AI assistant developed by 丹宇码农. How can I assist you ?"
  }

......

]
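Before uploading, it can be worth sanity-checking the file locally, since a malformed record will only surface later when load_dataset parses it. A minimal sketch (the file name and the "instruction"/"output" keys simply follow the format above):

import json

# Quick local check of train.json before uploading it to Google Drive.
with open("train.json", "r", encoding="utf-8") as f:
    records = json.load(f)

for i, rec in enumerate(records):
    assert "instruction" in rec and "output" in rec, f"record {i} is missing a key"
print(f"{len(records)} records look OK")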

The unsloth, trl, transformers, and related libraries are used.

Here is the complete code:

%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes

from unsloth import FastLanguageModel
import torch

max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.

# 4bit pre quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
    "unsloth/mistral-7b-bnb-4bit",
    "unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    "unsloth/llama-2-7b-bnb-4bit",
    "unsloth/gemma-7b-bnb-4bit",
    "unsloth/gemma-7b-it-bnb-4bit",   # Instruct version of Gemma 7b
    "unsloth/gemma-2b-bnb-4bit",
    "unsloth/gemma-2b-it-bnb-4bit",   # Instruct version of Gemma 2b
    "unsloth/llama-3-8b-bnb-4bit",    # [NEW] 15 Trillion token Llama-3
] # More models at https://huggingface.co/unsloth

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
)

model = FastLanguageModel.get_peft_model(
    model,
    r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0, # Supports any, but = 0 is optimized
    bias = "none",    # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = False,  # We support rank stabilized LoRA
    loftq_config = None, # And LoftQ
)
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token # Must add EOS_TOKEN
def formatting_prompts_func(examples):
    instructions = examples["instruction"]
    outputs      = examples["output"]
    texts = []
    for instruction, output in zip(instructions, outputs):
        input = ""
        # Must add EOS_TOKEN, otherwise your generation will go on forever!
        text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN
        texts.append(text)
    return { "text" : texts, }
pass

from datasets import load_dataset
#dataset = load_dataset("yahma/alpaca-cleaned", split = "train")
#dataset = dataset.map(formatting_prompts_func, batched = True,)

import os
from google.colab import drive
# Mount Google Drive; once mounted, a /content/drive/MyDrive/ directory appears in the file tree on the left
drive.mount('/content/drive')

# Load the local dataset: each record has "instruction" and "output"; "input" is left as an empty string
data_home = r"/content/drive/MyDrive/"
data_dict = {
    "train": os.path.join(data_home, "train.json"),
    #"validation": os.path.join(data_home, "dev.json"),
}
dataset = load_dataset("json", data_files=data_dict, split = "train")
print(dataset[0])
dataset = dataset.map(formatting_prompts_func, batched = True,)
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    packing = False, # Can make training 5x faster for short sequences.
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 60,
        learning_rate = 2e-4,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)

# Start fine-tuning
trainer_stats = trainer.train()
# Inference
# alpaca_prompt = Copied from above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
    alpaca_prompt.format(
        "你是谁?", # instruction
        "", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
# The answer printed here is clearly drawn from the training data rather than the base model's
# original behaviour, which shows that the fine-tuning has taken effect.
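If you prefer to watch the reply appear without the prompt scaffolding, a small variation can stream the generated tokens instead of decoding them afterwards. This is only a sketch; TextStreamer is a standard transformers utility, and skip_prompt simply hides the echoed prompt:

from transformers import TextStreamer

# Stream tokens as they are generated, so the reply is easier to read in Colab.
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64, use_cache = True)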
model.save_pretrained("lora_model") # Local saving
tokenizer.save_pretrained("lora_model")# Merge to 16bit
if True: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit",)
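To make the "save to Drive, download later" step concrete, here is a sketch: the Drive path is just an example location, and the reload pattern follows the usual Unsloth notebooks, which load a saved adapter back with FastLanguageModel.from_pretrained.

# Save the adapter directly into the mounted Google Drive (example path).
drive_dir = "/content/drive/MyDrive/lora_model"
model.save_pretrained(drive_dir)
tokenizer.save_pretrained(drive_dir)

# Later (e.g. in a new session), reload the fine-tuned adapter for inference.
if False:
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",      # or the Drive path above
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model)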

In fact, you can simply upload the .ipynb file to your personal Google Drive; double-clicking it opens Colab, and you just run the cells in order. Cells can be added, deleted, or modified at any time, which is very convenient, and the free GPU and CPU resources make Colab a good choice for AI enthusiasts.
