
Table of Contents

    • 1. Complete Code
    • 2. Step-by-Step Implementation
      • 2.1 Import Packages
      • 2.2 Data Preparation
      • 2.3 Character Tokenization
      • 2.4 Building the Dataset
      • 2.5 Defining the Model
      • 2.6 Model Training
      • 2.7 Model Inference
    • 3. Overall Summary

Text generation with an RNN and Unicode tokenization

1. Complete Code

Here we implement everything with TensorFlow; the full code is as follows:

# Complete code
import tensorflow as tf
import keras_nlp
import numpy as np

tokenizer = keras_nlp.tokenizers.UnicodeCodepointTokenizer(vocabulary_size=400)

# tokens -> ids
ids = tokenizer(['Why are you so funny?', 'how can i get you'])
# ids -> tokens
tokenizer.detokenize(ids)

def split_input_target(sequence):
    input_text = sequence[:-1]
    target_text = sequence[1:]
    return input_text, target_text

# Prepare the data
text = open('./shakespeare.txt', 'rb').read().decode(encoding='utf-8')
dataset = tf.data.Dataset.from_tensor_slices(tokenizer(text))
dataset = dataset.batch(64, drop_remainder=True)      # chunks of 64 tokens
dataset = dataset.map(split_input_target).batch(64)   # batches of 64 chunks

input, output = dataset.take(1).get_single_element()

# Define the model
d_model = 512
rnn_units = 1025

class CustomModel(tf.keras.Model):
    def __init__(self, vocabulary_size, d_model, rnn_units):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocabulary_size, d_model)
        self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True)
        self.dense = tf.keras.layers.Dense(vocabulary_size, activation='softmax')

    def call(self, inputs, states=None, return_state=False, training=False):
        x = self.embedding(inputs)
        if states is None:
            states = self.gru.get_initial_state(x)
        x, states = self.gru(x, initial_state=states, training=training)
        x = self.dense(x, training=training)
        if return_state:
            return x, states
        return x

model = CustomModel(tokenizer.vocabulary_size(), d_model, rnn_units)

# Inspect the model structure
model(input)
model.summary()

# Configure the model
model.compile(
    loss=tf.losses.SparseCategoricalCrossentropy(),
    optimizer='adam',
    metrics=['accuracy']
)

# Train the model
model.fit(dataset, epochs=3)

# Inference
class InferenceModel(tf.keras.Model):
    def __init__(self, model, tokenizer):
        super().__init__()
        self.model = model
        self.tokenizer = tokenizer

    def generate(self, inputs, length, return_states=False):
        inputs = tf.constant(inputs)[tf.newaxis]
        states = None
        input_ids = self.tokenizer(inputs).to_tensor()
        outputs = []
        for i in range(length):
            predicted_logits, states = self.model(inputs=input_ids, states=states, return_state=True)
            # Greedy decoding: keep only the prediction at the last position
            input_ids = tf.argmax(predicted_logits, axis=-1)[:, -1:]
            outputs.append(input_ids[0, 0].numpy())
        outputs = self.tokenizer.detokenize(outputs).numpy().decode('utf-8')
        if return_states:
            return outputs, states
        return outputs

infere = InferenceModel(model, tokenizer)

# Generate text
start_chars = 'hello'
outputs = infere.generate(start_chars, 1000)
print(start_chars + outputs)

2. Step-by-Step Implementation

2.1 Import Packages

First, import tensorflow, keras_nlp, and numpy:

import tensorflow as tf
import keras_nlp
import numpy as np

2.2 Data Preparation

The data comes from the works of Shakespeare, available at storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt; download it and save it as shakespeare.txt, as sketched below.
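If the file is not already on disk, one way to fetch it is tf.keras.utils.get_file (a minimal sketch; the copy to ./shakespeare.txt just matches the path used in the code below):

import shutil
import tensorflow as tf

# Downloads to the Keras cache and returns the local path
path = tf.keras.utils.get_file(
    'shakespeare.txt',
    'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt'
)
shutil.copy(path, './shakespeare.txt')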

2.3 Character Tokenization

Here we use Unicode tokenization, which treats every single character as a token:

tokenizer = keras_nlp.tokenizers.UnicodeCodepointTokenizer(vocabulary_size=400)

# tokens -> ids
ids = tokenizer(['Why are you so funny?', 'how can i get you'])
# ids -> tokens
tokenizer.detokenize(ids)
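For intuition, the ids here are simply the characters' Unicode code points, so the mapping is easy to inspect by hand; a quick sanity check, assuming the tokenizer defined above:

print(tokenizer('abc'))  # tf.Tensor([97 98 99], ...) -- the code points of 'a', 'b', 'c'
print(tokenizer.detokenize([104, 105]).numpy().decode('utf-8'))  # 'hi'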

2.4 Building the Dataset

Build the dataset from the text using the tokenizer and a split_input_target helper:

def split_input_target(sequence):
    input_text = sequence[:-1]
    target_text = sequence[1:]
    return input_text, target_text

text = open('./shakespeare.txt', 'rb').read().decode(encoding='utf-8')
dataset = tf.data.Dataset.from_tensor_slices(tokenizer(text))
dataset = dataset.batch(64, drop_remainder=True)      # chunks of 64 tokens
dataset = dataset.map(split_input_target).batch(64)   # batches of 64 chunks

input, output = dataset.take(1).get_single_element()
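To see what split_input_target does, here is a tiny illustration on a toy sequence (hypothetical, just for intuition): each target is the input shifted one step, which is exactly the next-character prediction objective.

sample = list('Tensor')
inp, tgt = split_input_target(sample)
print(inp)  # ['T', 'e', 'n', 's', 'o']
print(tgt)  # ['e', 'n', 's', 'o', 'r']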

2.5 Defining the Model

d_model = 512
rnn_units = 1025

class CustomModel(tf.keras.Model):
    def __init__(self, vocabulary_size, d_model, rnn_units):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocabulary_size, d_model)
        self.gru = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True)
        self.dense = tf.keras.layers.Dense(vocabulary_size, activation='softmax')

    def call(self, inputs, states=None, return_state=False, training=False):
        x = self.embedding(inputs)
        if states is None:
            states = self.gru.get_initial_state(x)
        x, states = self.gru(x, initial_state=states, training=training)
        x = self.dense(x, training=training)
        if return_state:
            return x, states
        return x

model = CustomModel(tokenizer.vocabulary_size(), d_model, rnn_units)

# Inspect the model structure
model(input)
model.summary()
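Before training, it is worth confirming that the forward pass produces the expected shape. With the batching above, each input sequence has 63 tokens (64 minus the one lost to the input/target shift), so the logits should come out as (batch, sequence_length, vocabulary_size):

logits = model(input)
print(logits.shape)  # expected: (64, 63, 400)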

2.6 Model Training

model.compile(
    loss=tf.losses.SparseCategoricalCrossentropy(),
    optimizer='adam',
    metrics=['accuracy']
)

model.fit(dataset, epochs=3)
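Three epochs is very little for a character-level model; if you train for longer, saving weights periodically is cheap insurance. A minimal sketch using the standard tf.keras.callbacks.ModelCheckpoint (the ./checkpoints path and the epoch count are arbitrary choices, not from the original):

import tensorflow as tf

checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath='./checkpoints/ckpt_{epoch}.weights.h5',  # arbitrary location
    save_weights_only=True
)
model.fit(dataset, epochs=30, callbacks=[checkpoint_cb])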

2.7 Model Inference

Define an InferenceModel wrapper that drives the trained model step by step:

class InferenceModel(tf.keras.Model):
    def __init__(self, model, tokenizer):
        super().__init__()
        self.model = model
        self.tokenizer = tokenizer

    def generate(self, inputs, length, return_states=False):
        inputs = tf.constant(inputs)[tf.newaxis]
        states = None
        input_ids = self.tokenizer(inputs).to_tensor()
        outputs = []
        for i in range(length):
            predicted_logits, states = self.model(inputs=input_ids, states=states, return_state=True)
            # Greedy decoding: keep only the prediction at the last position
            input_ids = tf.argmax(predicted_logits, axis=-1)[:, -1:]
            outputs.append(input_ids[0, 0].numpy())
        outputs = self.tokenizer.detokenize(outputs).numpy().decode('utf-8')
        if return_states:
            return outputs, states
        return outputs

infere = InferenceModel(model, tokenizer)

start_chars = 'hello'
outputs = infere.generate(start_chars, 1000)
print(start_chars + outputs)
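One likely reason the output below is so poor is the greedy argmax decoding, which quickly locks into repetitive patterns; sampling from the predicted distribution usually reads better. A hedged sketch of a drop-in replacement for the argmax line (temperature is a hypothetical tuning knob; tf.random.categorical expects logits, so we take the log of the softmax output):

import tensorflow as tf

def sample_next(probs, temperature=0.8):
    # probs: (1, seq_len, vocab) softmax output of the model
    logits = tf.math.log(probs[:, -1, :]) / temperature  # lower temperature -> sharper
    return tf.random.categorical(logits, num_samples=1)  # shape (1, 1)

# Inside generate(), replace the argmax line with:
# input_ids = sample_next(predicted_logits)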

The generated text is shown below; the quality is, frankly, quite poor:

hellonofur us:
medous, teserwomador.
walled o y.
as
t aderemowate tinievearetyedust. manonels,
w?
workeneastily.
watrenerdores aner'shra
palathermalod, te a y, s adousced an
ptit: mamerethus:
bas as t: uaruriryedinesm's lesoureris lares palit al ancoup, maly thitts?
b veatrt
watyeleditenchitr sts, on fotearen, medan ur
tiblainou-lele priniseryo, ofonet manad plenerulyo
thilyr't th
palezedorine.
ti dous slas, sed, ang atad t,
wanti shew.
e
upede wadraredorenksenche:
wedemen stamesly ateara tiafin t t pes:
t: tus mo at
io my.
ane hbrelely berenerusedus' m tr;
p outellilid ng
ait tevadwantstry.
arafincara, es fody
'es pra aluserelyonine
pales corseryea aburures
angab:
sunelyothe: s al, chtaburoly o oonis s tioute tt,
pro.
tedeslenali: s 't ing h
sh, age de, anet: hathes: s es'tht,
as:
wedly at s serinechamai:
mored t.
t monatht t athoumonches le.
chededondirineared
ter
p y
letinalys
ani
aconen,
t rs:
t;et, tes-
luste aly,
thonort aly one telus, s mpsantenam ranthinarrame! a
pul; bon
s fofuly

3. Overall Summary

An RNN combined with Unicode tokenization can indeed generate text, but the quality leaves a lot to be desired!
