
LSTM: How to Train Neural Networks to Write Like Lovecraft

10 points by strikingloo, almost 6 years ago

1 comment

strikingloo, almost 6 years ago
Hey guys, I'm the writer. As you can see from the post, I'm still very much learning.

What I want the most from this site is for more experienced people to help me out with some of my questions. Here they come:

- Can you use Batch Normalization (the one from tf.keras) on an LSTM layer? Or will it break the model?

- How do you deal with extremely infrequent words if you do a word-based LSTM (with a one-hot encoding of each word in the corpus)? Do you remove them? Replace them? Cluster them?

- Do you think there's any other architecture that would've had better results, while still not taking too long to train?
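On the rare-word question: one standard preprocessing approach (not from the post; the min_count threshold and the <UNK> token name below are illustrative assumptions) is to fold every word seen fewer than min_count times into a single shared unknown token before building the vocabulary, which keeps the one-hot dimension from exploding. A minimal sketch:

    from collections import Counter

    def build_vocab(tokenized_texts, min_count=5, unk_token="<UNK>"):
        # Count word frequencies across the whole corpus.
        counts = Counter(w for text in tokenized_texts for w in text)
        # Reserve id 0 for the unknown token; keep only frequent words.
        vocab = {unk_token: 0}
        for word, count in counts.items():
            if count >= min_count:
                vocab[word] = len(vocab)
        return vocab

    def encode(tokens, vocab, unk_token="<UNK>"):
        # Map each token to its id, folding rare words into <UNK>.
        unk_id = vocab[unk_token]
        return [vocab.get(w, unk_id) for w in tokens]

    corpus = [["the", "shoggoth", "stirred"], ["the", "stars", "were", "right"]]
    vocab = build_vocab(corpus, min_count=2)
    print(encode(corpus[0], vocab))  # -> [1, 0, 0]: only "the" clears the cutoff

On the Batch Normalization question: BatchNorm's per-batch statistics are awkward across recurrent time steps, and LayerNormalization (which normalizes each sample independently) is the variant usually suggested for RNNs. A sketch of where it could sit between stacked LSTM layers; the vocabulary size and layer widths here are made up:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=10000, output_dim=128),
        tf.keras.layers.LSTM(256, return_sequences=True),
        # Normalizes per sample, not per batch, so it is safe across time steps.
        tf.keras.layers.LayerNormalization(),
        tf.keras.layers.LSTM(256),
        tf.keras.layers.Dense(10000, activation="softmax"),
    ])

Standard BatchNormalization could still be applied to non-recurrent layers (e.g. after the Embedding), but inside the recurrence it would mix statistics across time steps, which is the usual reason it is avoided there.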
Comment #20262275 not loaded
Comment #20262979 not loaded