Multi-Task Learning

I have recently been working on an NLP task that seemed like a very good fit for MTL (Multi-Task Learning), so I did a little survey of MTL and found a fairly good article. Here I summarize my understanding and translate it as notes. Original article: http://ruder.io/multi-task/index.html#whydoesmtlwork; if your English is decent, I still recommend reading the original.

First, what is MTL? Multi-task learning trains several tasks together (they share some parameters, and their losses are combined with weighting) to obtain better results and better generalization. MTL has been applied successfully in many fields, such as natural language processing, speech recognition, computer vision, and drug discovery. In deep learning there are two main ways to implement it: hard parameter sharing and soft parameter sharing. Hard parameter sharing is the more widely used and simpler one: the parameters of the hidden layers are shared across all tasks, while each task keeps its own task-specific output layer, as shown below:

Hard parameter sharing

Hard parameter sharing greatly reduces the risk of overfitting. Intuitively, if we learn many tasks at the same time, the model must learn a representation that captures all of the tasks, so the probability of overfitting to any single task drops substantially.
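
As a minimal sketch of what hard parameter sharing with weighted task losses can look like (my own TensorFlow 1.x illustration, not from the article; the layer sizes, the two task heads, and the 0.7/0.3 loss weights are made up):

  import tensorflow as tf

  def hard_sharing_model(features, labels_a, labels_b, num_classes_a, num_classes_b):
    """Hard parameter sharing: one shared trunk, one output head per task."""
    # Shared hidden layers: their parameters are used by every task.
    shared = tf.layers.dense(features, 256, activation=tf.nn.relu, name="shared_1")
    shared = tf.layers.dense(shared, 128, activation=tf.nn.relu, name="shared_2")

    # Task-specific output layers.
    logits_a = tf.layers.dense(shared, num_classes_a, name="head_task_a")
    logits_b = tf.layers.dense(shared, num_classes_b, name="head_task_b")

    loss_a = tf.losses.sparse_softmax_cross_entropy(labels=labels_a, logits=logits_a)
    loss_b = tf.losses.sparse_softmax_cross_entropy(labels=labels_b, logits=logits_b)

    # Weighted sum of the task losses; the weights are hyperparameters.
    return 0.7 * loss_a + 0.3 * loss_b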

Soft parameter sharing is a little more involved: each task has its own model with its own hidden-layer parameters, but a regularization constraint is added between the tasks' hidden-layer parameters so that they stay similar, as shown below:

Soft parameter sharing
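
A minimal sketch of soft sharing via an L2 penalty between the two tasks' hidden-layer weights (again my own illustration, not from the article; it assumes both task towers define identically shaped trainable variables in the same order under the scopes task_a and task_b):

  import tensorflow as tf

  def soft_sharing_penalty(scope_a="task_a", scope_b="task_b", strength=1e-3):
    """L2 penalty pulling the two tasks' hidden-layer weights toward each other."""
    vars_a = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope_a)
    vars_b = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope_b)
    # Assumes both towers create identically shaped variables in the same order.
    penalty = tf.add_n([tf.reduce_sum(tf.square(wa - wb))
                        for wa, wb in zip(vars_a, vars_b)])
    return strength * penalty

  # total_loss = loss_a + loss_b + soft_sharing_penalty()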

So why does MTL help, and why does it work so well? Some of this was touched on above; here are the reasons in detail, which are also the theme of the article:

  1. Implicit data augmentation
  2. Attention focusing
  3. Eavesdropping
  4. Representation bias
  5. Regularization

Implicit data augmentation: each task in MTL has its own noise distribution. A model trained only on task A focuses on handling task A's noise pattern, whereas MTL has to learn a representation that works across all of the noise patterns, which improves generalization.

Attention focusing: if a task is very noisy, or its data are scarce and high-dimensional, a single-task model has a hard time telling relevant from irrelevant features. With MTL, features that matter for this task are also shared by the other tasks, which helps the model separate relevant from irrelevant features and supports its decisions.

Eavesdropping: features learned for one task may be useful for another task, while that other task might find the same features very hard to learn on its own.

Representation bias: MTL biases the model toward representations that other tasks also prefer, which helps it generalize to new tasks. Intuitively, the better MTL generalizes across many tasks, the more likely it is to generalize to an unseen task, much like in deep learning, where more samples generally mean better generalization.

Regularization: the parameter sharing in MTL is itself a form of regularization.

The article linked at the top has much more, e.g. recent work on MTL and MTL with non-neural models; take a look if you are interested. That's it for this note.

— 2019-01-12 09:09

BERT Series 3 – transformer

First, a disclaimer: the transformer here is not the Transformers robot; it is the basic building block of BERT. This is probably the driest part of the three-post series, but it is still worth writing down.

Paper: Attention Is All You Need (nice title)
Git repo 1: https://github.com/tensorflow/tensor2tensor
Git repo 2: https://github.com/google-research/bert

The transformer is BERT's basic building block. In Series 2 we treated it as a black box; here we look inside that black box. The model architecture is shown below:

As the figure shows, the transformer has two main parts, an encoder and a decoder. At a high level, the encoder is a stack of N identical layers, and each identical layer has two sub-layers: Multi-Head Attention followed by Add & Norm, and Feed Forward followed by Add & Norm. The decoder is broadly similar, except that each of its layers has an additional Multi-Head Attention + Add & Norm sub-layer. If this still sounds abstract, let's walk through the flow step by step.

First, the model's input goes through an embedding layer; the paper uses learned embeddings, and the encoder and decoder share the same learned embedding matrix.
Next, the embeddings go through a positional encoding, which injects position information. The formula is:

    \[\begin{split} PE_{(pos,2i)}=\sin(pos/10000^{2i/d_{model}})\\ PE_{(pos,2i+1)}=\cos(pos/10000^{2i/d_{model}}) \end{split}\]

Here pos is the position of the word in the sequence and i indexes the embedding dimensions; for each dimension we can compute the encoding directly and add the resulting vector element-wise to the input embedding. Both the encoder and the decoder do this.
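
Here is a small NumPy sketch of that sinusoidal encoding (my own illustration, not the tensor2tensor implementation; it assumes d_model is even):

  import numpy as np

  def positional_encoding(max_len, d_model):
    """Returns a [max_len, d_model] matrix of sinusoidal position encodings."""
    pe = np.zeros((max_len, d_model))
    position = np.arange(max_len)[:, np.newaxis]                        # pos
    div = np.power(10000.0, np.arange(0, d_model, 2) / float(d_model))  # 10000^(2i/d_model)
    pe[:, 0::2] = np.sin(position / div)   # even dimensions: sin
    pe[:, 1::2] = np.cos(position / div)   # odd dimensions: cos
    return pe

  # For a single sequence of length seq_len, the encoding is simply added element-wise:
  # embeddings = embeddings + positional_encoding(seq_len, d_model)
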
The next step is Multi-Head Attention. The paper first presents the attention function as a mapping from a query Q and key-value pairs (K, V) to an output; for background see section 3.2 of the paper Key-Value Attention Mechanism for Neural Machine Translation, where a vector is essentially split into two parts of the same dimension. Here you can think of Q as the query/weight, K as the key (the first half of the vector), used to compute the softmax weights, and V as the value (the second half), which is multiplied by those weights and summed. The formula is:

    \[Attention(Q,K,V)=softmax(\frac{QK^T}{\sqrt{d_k}})V\]

Note the extra scaling factor \( \sqrt{d_k} \) in the denominator. The paper explains why: for large \( d_k \) the dot products grow large in magnitude, pushing the softmax into regions with extremely small gradients (like the flat ends of a sigmoid curve), and the scaling counteracts that effect.
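
A minimal NumPy sketch of the scaled dot-product attention formula above (illustrative only, not the tensor2tensor code):

  import numpy as np

  def scaled_dot_product_attention(Q, K, V, mask=None):
    """Q: [n_q, d_k], K: [n_k, d_k], V: [n_k, d_v] -> output: [n_q, d_v]."""
    d_k = Q.shape[-1]
    scores = Q.dot(K.T) / np.sqrt(d_k)        # scaled dot products
    if mask is not None:                      # e.g. the decoder's left-to-right mask
      scores = np.where(mask, scores, -1e9)   # blocked positions get a huge negative score
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights.dot(V)
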
The attention described above is called Scaled Dot-Product Attention; it is the basic element of the Multi-Head Attention block in the figure above, as shown below:

It is not hard to follow; here is the formula directly:

    \[\begin{split} MultiHead(Q,K,V)=Concat(head_1,...,head_h)W^O\\ where\ head_i=Attention(QW_i^Q,KW_i^K,VW_i^V) \end{split}\]
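
A compact sketch of multi-head attention following that formula, reusing the scaled_dot_product_attention sketch above (the per-head projections WQ, WK, WV and the output projection WO would be learned parameters in a real model):

  import numpy as np

  def multi_head_attention(Q, K, V, WQ, WK, WV, WO):
    """WQ/WK/WV: lists of per-head projection matrices, WO: output projection."""
    heads = []
    for i in range(len(WQ)):
      # head_i = Attention(Q WQ_i, K WK_i, V WV_i)
      heads.append(scaled_dot_product_attention(Q.dot(WQ[i]), K.dot(WK[i]), V.dot(WV[i])))
    # Concat(head_1, ..., head_h) W^O
    return np.concatenate(heads, axis=-1).dot(WO)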

Note also that the first attention sub-layer in the decoder uses a mask, so that each decoder position depends only on outputs to its left and no information flows in from the right; it is a technical device rather than a structural feature. To make this easier to understand, remember that the transformer feeds the whole text into the model at once rather than processing tokens one by one like an RNN, which is what gives it such high parallelism.
The next step is Add & Norm, i.e. LayerNorm(x + Sublayer(x)); this introduces a residual connection, and it seems the research community has not fully settled on why residual connections work so well.
The step after that is Feed Forward, with the formula:

    \[FFN(x)=max(0,xW_1+b_1)W_2+b_2\]
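
A sketch of this position-wise feed-forward sub-layer together with the Add & Norm wrapper described above (NumPy, illustrative; W1, b1, W2, b2 and the LayerNorm gain/bias would be learned):

  import numpy as np

  def feed_forward(x, W1, b1, W2, b2):
    """FFN(x) = max(0, x W1 + b1) W2 + b2, applied to each position independently."""
    return np.maximum(0, x.dot(W1) + b1).dot(W2) + b2

  def add_and_norm(x, sublayer_out, eps=1e-6):
    """LayerNorm(x + Sublayer(x)) over the last dimension."""
    y = x + sublayer_out                     # residual connection
    mean = y.mean(axis=-1, keepdims=True)
    std = y.std(axis=-1, keepdims=True)
    return (y - mean) / (std + eps)          # learned gain/bias omitted for brevity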

Then another Add & Norm, which completes the encoder's identical layer; repeat it N times and you have the encoder.
The decoder reuses the same building blocks as the encoder, so no further explanation is needed.

The paper also spends a section on why self-attention. There are three reasons: per-layer computational complexity, the amount of computation that can be parallelized, and the fact that the long-range dependency problem largely goes away (the maximum path length becomes short). All three essentially say it is more efficient, and of course the experimental results are also better, something BERT demonstrates very well. Here are the paper's results:

There is also a side benefit: the model is easier to interpret.

Finally, note that BERT does not use everything described above, only the transformer's encoder, which is why it can be highly parallel; I think this is also why the position embedding is needed. For implementation details, see the transformer_model function in the BERT code.

— 2018-12-14 17:16

BERT Series 2 – pre-training

BERT: Bidirectional Encoder Representations from Transformers
Paper: https://arxiv.org/abs/1810.04805

BERT needs no introduction: it broke many NLP records and showed just how far transfer learning can go; another quality product from Google. To get a feel for it: a 7.6% absolute improvement on GLUE, a 5.6% absolute improvement on MultiNLI, and a 1.5-point absolute improvement on SQuAD v1.1. It is practically an all-purpose NLP model. Below we look at BERT's pre-training in detail; fine-tuning was covered in Series 1 and is not repeated here.

Pre-training means training a model once that can then be adapted to many NLP tasks, such as text classification, NER, and QA. Google has open-sourced the code, and generously released not only an English pre-trained model but also a Chinese one, saving us a lot of money (reportedly a single pre-training run on TPUs costs on the order of 50,000 RMB). With small modifications the model can be applied to many NLP tasks, which is very convenient. Code: https://github.com/google-research/bert

Now for the main topic: model architecture. If we ignore what a transformer is (covered in Series 3) and treat it as a black box, the architecture is quite simple: a multi-layer transformer, with the number of layers, hidden width, and other hyperparameters all configurable; see the paper for details.
The first part is the model input, where the paper puts in quite a lot of careful detail, as shown in the figure:

First, it uses WordPiece embeddings; second, it adds positional embeddings; third, it adds segment embeddings. The paper does not say much about the last two. From the code you can see that the positional embedding is a learned position encoding: after training, every position has a vector of the same dimension as the word embedding, which is added to the word embedding, as in the code below:

  if use_position_embeddings:
    assert_op = tf.assert_less_equal(seq_length, max_position_embeddings)
    with tf.control_dependencies([assert_op]):
      full_position_embeddings = tf.get_variable(
          name=position_embedding_name,
          shape=[max_position_embeddings, width],
          initializer=create_initializer(initializer_range))
      # Since the position embedding table is a learned variable, we create it
      # using a (long) sequence length `max_position_embeddings`. The actual
      # sequence length might be shorter than this, for faster training of
      # tasks that do not have long sequences.
      #   
      # So `full_position_embeddings` is effectively an embedding table
      # for position [0, 1, 2, ..., max_position_embeddings-1], and the current
      # sequence has positions [0, 1, 2, ... seq_length-1], so we can just
      # perform a slice.
      position_embeddings = tf.slice(full_position_embeddings, [0, 0], 
                                     [seq_length, -1])
      num_dims = len(output.shape.as_list())

      # Only the last two dimensions are relevant (`seq_length` and `width`), so
      # we broadcast among the first dimensions, which is typically just
      # the batch size.
      position_broadcast_shape = []
      for _ in range(num_dims - 2):
        position_broadcast_shape.append(1)
      position_broadcast_shape.extend([seq_length, width])
      position_embeddings = tf.reshape(position_embeddings,
                                       position_broadcast_shape)
      output += position_embeddings

Why add a positional embedding at all? Because, unlike an RNN, a stack of transformer layers carries no position information by itself.
The segment embedding is even simpler: tokens in the first [SEP]-delimited sentence get id 0 and tokens in the second get id 1, and a small lookup table maps that 0 or 1 to a vector of the word-embedding dimension, with learnable parameters. The code also explains why a segment embedding is needed on top of [SEP]: it makes it easier for the model to learn the notion of which segment a token belongs to.

  if use_token_type:
    if token_type_ids is None:
      raise ValueError("`token_type_ids` must be specified if"
                       "`use_token_type` is True.")
    token_type_table = tf.get_variable(
        name=token_type_embedding_name,
        shape=[token_type_vocab_size, width],
        initializer=create_initializer(initializer_range))
    # This vocab will be small so we always do one-hot here, since it is always
    # faster for a small vocabulary.
    flat_token_type_ids = tf.reshape(token_type_ids, [-1])
    one_hot_ids = tf.one_hot(flat_token_type_ids, depth=token_type_vocab_size)
    token_type_embeddings = tf.matmul(one_hot_ids, token_type_table)
    token_type_embeddings = tf.reshape(token_type_embeddings,
                                       [batch_size, seq_length, width])
    output += token_type_embeddings

These three embeddings are summed to form the input to the multi-layer transformer, and that is the whole model.

How do you do classification? Take BERT's output at the first token ([CLS]) and put an LR-style layer on top. How do you do similarity? Same thing: the first token's output plus an LR layer. How do you do QA or NER? Take the per-token outputs and apply an LR layer to every position. It is that simple. How do you get those outputs or embeddings? Google already wrote it for you:

  def get_pooled_output(self):
    return self.pooled_output

  def get_sequence_output(self):
    """Gets final hidden layer of encoder.

    Returns:
      float Tensor of shape [batch_size, seq_length, hidden_size] corresponding
      to the final hidden of the transformer encoder.
    """
    return self.sequence_output

  def get_all_encoder_layers(self):
    return self.all_encoder_layers

  def get_embedding_output(self):
    """Gets output of the embedding lookup (i.e., input to the transformer).

    Returns:
      float Tensor of shape [batch_size, seq_length, hidden_size] corresponding
      to the output of the embedding layer, after summing the word
      embeddings with the positional embeddings and the token type embeddings,
      then performing layer normalization. This is the input to the transformer.
    """
    return self.embedding_output

  def get_embedding_table(self):
    return self.embedding_table

Of course, that is not the end of it: how do you pre-train this model? Google put a lot of thought into this part, which is probably a big reason the model works so well, and it offers us plenty of inspiration.
Two unsupervised tasks are used for pre-training: Masked LM and Next Sentence Prediction.
The first task, MLM, in short randomly masks 15% of the tokens and asks the model to predict them. The details matter: the selected tokens are not always replaced by [MASK]; that happens only 80% of the time, 10% are replaced with a random token, and the remaining 10% are left unchanged (a sketch of this replacement rule is given after the two task descriptions below). The transformer does not know which tokens it will be asked to predict, so it has to keep a contextual representation of every token, and since only a small fraction of tokens are replaced, the semantics are not destroyed.
The second task is to predict whether sentence B actually follows sentence A. This task brings large gains for QA and NLI.
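
A small sketch of the 80/10/10 replacement rule mentioned above (my own simplified version, not the create_pretraining_data.py code; vocab and the [MASK] string are illustrative):

  import random

  def mask_token(token, vocab, mask="[MASK]"):
    """Apply BERT's replacement rule to one token already chosen for prediction."""
    r = random.random()
    if r < 0.8:                  # 80% of the time: replace with [MASK]
      return mask
    elif r < 0.9:                # 10% of the time: replace with a random token
      return random.choice(vocab)
    else:                        # remaining 10%: keep the original token
      return token

  # About 15% of the positions in each sequence are chosen for prediction,
  # and mask_token() is applied to each chosen position.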

The final loss is defined as the sum of the two tasks' (mean) losses.

I will not go into the experimental results here; suffice it to say they are impressive.

— 2018-12-13 18:41

BERT Series 1 – fine-tuning

Code: https://github.com/google-research/bert

To learn BERT, we start with the simplest way to use it. Google has open-sourced the code with very detailed comments, and it is easy to get the model running (even on CPU). But just running the demo is not enough; you need to learn how to customize the code and adapt it to your own problems.
Here is a quick walkthrough of the fine-tuning script run_classifier.py, because it is the simplest and the most widely applicable.
I do not have a machine powerful enough for run_pretraining.py, so I cannot run it for now; fortunately Google has already pre-trained a Chinese model, chinese_L-12_H-768_A-12.zip, which can be used directly. run_squad.py is more specialized, and once you understand run_classifier.py it is not a problem either.
I will not go over the straightforward parts of the code and will focus on the parts that are harder to understand.

At the top of the script several dataset Processors are defined, all sharing a common base class, whose job is mainly to produce the training set, test set, labels, and so on. If you want to adapt the code to your own application, this is one of the key places to modify: copy a Processor and adjust it to the layout of your own dataset. It is not hard; you mainly fill in guid, text_a, text_b, and label. (Why text_a and text_b? That is explained later in this BERT series.)
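
As a hedged illustration of what such an adaptation might look like, assuming it lives inside run_classifier.py where DataProcessor, InputExample, os, and tokenization are already available (the class name, file names, and two-column TSV layout are made up):

  class MyTaskProcessor(DataProcessor):
    """Processor for a hypothetical two-column TSV dataset: label<TAB>sentence."""

    def get_train_examples(self, data_dir):
      return self._create_examples(
          self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
      return self._create_examples(
          self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_labels(self):
      return ["0", "1"]   # binary classification, purely as an example

    def _create_examples(self, lines, set_type):
      examples = []
      for i, line in enumerate(lines):
        guid = "%s-%d" % (set_type, i)
        label = line[0]                                     # first column: label
        text_a = tokenization.convert_to_unicode(line[1])   # second column: sentence
        examples.append(InputExample(guid=guid, text_a=text_a,
                                     text_b=None, label=label))
      return examples
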
Next comes the tokenizer; the code comment below gives a good idea of what it does:

    """Tokenizes a piece of text into its word pieces.

    This uses a greedy longest-match-first algorithm to perform tokenization
    using the given vocabulary.

    For example:
      input = "unaffable"
      output = ["un", "##aff", "##able"]

    Args:
      text: A single token or whitespace separated tokens. This should have
        already been passed through `BasicTokenizer.

    Returns:
      A list of wordpiece tokens.
    """

Then come some BERT configuration settings; here we treat them as a black box. If you are interested, take a look at the official TensorFlow documentation.
Then comes the model definition in model_fn_builder. It is worth looking one level deeper here, because this is where the fine-tuning code is defined:

def create_model(bert_config, is_training, input_ids, input_mask, segment_ids,
                 labels, num_labels, use_one_hot_embeddings):
  """Creates a classification model."""
  model = modeling.BertModel(
      config=bert_config,
      is_training=is_training,
      input_ids=input_ids,
      input_mask=input_mask,
      token_type_ids=segment_ids,
      use_one_hot_embeddings=use_one_hot_embeddings)

  # In the demo, we are doing a simple classification task on the entire
  # segment.
  #
  # If you want to use the token-level output, use model.get_sequence_output()
  # instead.
  output_layer = model.get_pooled_output()

  hidden_size = output_layer.shape[-1].value

  output_weights = tf.get_variable(
      "output_weights", [num_labels, hidden_size],
      initializer=tf.truncated_normal_initializer(stddev=0.02))

  output_bias = tf.get_variable(
      "output_bias", [num_labels], initializer=tf.zeros_initializer())

  with tf.variable_scope("loss"):
    if is_training:
      # I.e., 0.1 dropout
      output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)

    logits = tf.matmul(output_layer, output_weights, transpose_b=True)
    logits = tf.nn.bias_add(logits, output_bias)
    probabilities = tf.nn.softmax(logits, axis=-1)
    log_probs = tf.nn.log_softmax(logits, axis=-1)

    one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)

    per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
    loss = tf.reduce_mean(per_example_loss)

    return (loss, per_example_loss, logits, probabilities)

As you can see, the output layer here is just a single LR-style (linear + softmax) layer.
Next the code checks whether the task is training, evaluation, or prediction; the three branches are similar, so let's look at the prediction branch:

    predict_examples = processor.get_test_examples(FLAGS.data_dir)
    predict_file = os.path.join(FLAGS.output_dir, "predict.tf_record")
    file_based_convert_examples_to_features(predict_examples, label_list,
                                            FLAGS.max_seq_length, tokenizer,
                                            predict_file)

    tf.logging.info("***** Running prediction*****")
    tf.logging.info("  Num examples = %d", len(predict_examples))
    tf.logging.info("  Batch size = %d", FLAGS.predict_batch_size)

    if FLAGS.use_tpu:
      # Warning: According to tpu_estimator.py Prediction on TPU is an
      # experimental feature and hence not supported here
      raise ValueError("Prediction in TPU not supported")

    predict_drop_remainder = True if FLAGS.use_tpu else False
    predict_input_fn = file_based_input_fn_builder(
        input_file=predict_file,
        seq_length=FLAGS.max_seq_length,
        is_training=False,
        drop_remainder=predict_drop_remainder)

    result = estimator.predict(input_fn=predict_input_fn)

    output_predict_file = os.path.join(FLAGS.output_dir, "test_results.tsv")
    with tf.gfile.GFile(output_predict_file, "w") as writer:
      tf.logging.info("***** Predict results *****")
      for prediction in result:
        output_line = "\t".join(
            str(class_probability) for class_probability in prediction) + "\n"
        writer.write(output_line)

The code first loads the data, writes it to a TFRecord file, then reads it back through file_based_input_fn_builder and feeds it to the estimator, and finally writes the results to output_predict_file, all in one go.

Overall, the code is genuinely well written: polished yet easy to understand. Beyond admiring it, when customizing you need to pay attention to details such as the Processor handling, how to get BERT's outputs, how to get BERT's embeddings, and how to write the fine-tuning code.

Q: What is the warmup parameter for?
A: My understanding is that it addresses slow convergence when training in a distributed setting; during the warmup steps the learning rate is ramped up gradually from a small value.
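
For reference, a stand-alone sketch of what a linear warmup schedule does (simplified, not the exact optimization.py code; init_lr and num_warmup_steps are illustrative):

  def warmup_learning_rate(step, init_lr=5e-5, num_warmup_steps=1000):
    """Linearly ramp the learning rate from 0 up to init_lr over the warmup steps."""
    if step < num_warmup_steps:
      return init_lr * float(step) / float(num_warmup_steps)
    return init_lr  # after warmup, BERT actually applies a linear decay instead

  # warmup_learning_rate(0) == 0.0, warmup_learning_rate(500) == 2.5e-05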

Q: Are BERT's parameters updated during fine-tuning?
A: I did not see any code setting trainable to False, so they should be updated.

Q: Anything else to watch out for?
A: Of course. You need to know BERT's interfaces in order to customize it for your needs; for NER, for example, you cannot follow run_classifier.py directly:

  def get_pooled_output(self):
    return self.pooled_output

  def get_sequence_output(self):
    """Gets final hidden layer of encoder.

    Returns:
      float Tensor of shape [batch_size, seq_length, hidden_size] corresponding
      to the final hidden of the transformer encoder.
    """
    return self.sequence_output

  def get_all_encoder_layers(self):
    return self.all_encoder_layers

  def get_embedding_output(self):
    """Gets output of the embedding lookup (i.e., input to the transformer).

    Returns:
      float Tensor of shape [batch_size, seq_length, hidden_size] corresponding
      to the output of the embedding layer, after summing the word
      embeddings with the positional embeddings and the token type embeddings,
      then performing layer normalization. This is the input to the transformer.
    """
    return self.embedding_output

  def get_embedding_table(self):
    return self.embedding_table

Q: Why write the data to a TFRecord file and then read it back?
A: This is where TFRecord shines: it is efficient, well integrated, works in distributed settings, and is convenient; otherwise you would have to implement batched reading and the like yourself.
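
As a rough sketch of what reading those records back looks like, with batching handled by tf.data (the feature names follow run_classifier.py, but the function itself is simplified and illustrative):

  import tensorflow as tf

  def make_input_fn(tfrecord_file, seq_length, batch_size):
    """Reads the serialized examples back and batches them via tf.data."""
    name_to_features = {
        "input_ids": tf.FixedLenFeature([seq_length], tf.int64),
        "input_mask": tf.FixedLenFeature([seq_length], tf.int64),
        "segment_ids": tf.FixedLenFeature([seq_length], tf.int64),
        "label_ids": tf.FixedLenFeature([], tf.int64),
    }

    def input_fn(params):
      dataset = tf.data.TFRecordDataset(tfrecord_file)
      dataset = dataset.map(
          lambda record: tf.parse_single_example(record, name_to_features))
      return dataset.batch(batch_size)

    return input_fn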

— 2018-12-11 17:08

Jack Ma – 2019-01-23 – Easy News

News


Armstrong Henry H Associates Lifted Its Air Products & Chemicals (APD) Stake; Selkirk …
The Hi New Ulm
… have unveiled an unstaffed car vending machine in China; 03/05/2018 – Jack Ma’s Ant Financial adds two new money market funds to its platform; …

Pretium Res (PVG) Share Price Declined While Sun Valley Gold Cut Holding by $1.76 Million; As …
The Hi New Ulm
… 08/04/2018 – China’s SenseTime closes $600 mln funding led by Alibaba; 09/04/2018 – Alibaba’s Jack Ma urges Facebook to fix privacy issues; …

Garrison Bradford & Associates Has Lowered Its Holding in Thermo Fisher Sci (TMO) by $488000 …
The Yomi Blog (blog)
… IN THE U.S. ONCE INVENTORIES RUN OUT – NIKKEI; 30/05/2018 – MEDIA- Jack Ma’s Ant Financial lifts funding to over $12 billion – Bloomberg; …

As Hartford Finl Svcs 7.875 Pfd (HGH) Share Value Were Volatile, Roosevelt Investment Group …
The Yomi Blog (blog)
… COOPERATION FRAMEWORK PACT WITH ALIBABA; 10/04/2018 – Alibaba founder Jack Ma says friction between U.S. and China to be expected; …

Tech conference DLD: Will China win the race to artificial intelligence?
Wire News Fax
… however, highlights that the Chinese success so far, thanks mainly to private capital and entrepreneurship – in allusion to, for example, Jack Ma , …

Micro firms handed a boost, thanks to new breed of private banks
ecns
Although his company is still very small, Peng said he also wants to run a business for 102 years, just like the practice of Jack Ma , the founder of …

Alibaba Undertakes Aggressive Cost Cuts Amid Slowing Economy
Nasdaq
Alibaba Group Holding Limited BABA recently announced that it is looking to curb travel expenses and postpone the recruitment of new personnel …

Here’s the latest from Davos
CNN
The highlights from the business crowd are: Alibaba founder Jack Ma , BlackRock Chairman Larry Fink and Salesforce Chairman Marc Benioff.

Jack Ma featured in Foreign Policy magazine
ecns
Alibaba Founder Jack Ma has been listed among 2019’s list of top ‘100 global thinkers’ by Foreign Policy magazine. Jack Ma is the only Chinese …

IOC welcomes Alibaba Executive Chairman Jack Ma to Lausanne
Olympics
International Olympic Committee President Thomas Bach welcomed Jack Ma , the Executive Chairman of the Alibaba Group, to the IOC headquarters …

AMD – 2019-01-23 – Easy News

News


Two gaming monitors with AMD FreeSync are on sale for less than $300
Polygon
In the market for a reasonably priced, but still reasonably powerful gaming monitor? Online electronics retailer Monoprice is slashing prices on two …

Rumor: AMD Gonzalo APU for Next-Gen Game Consoles Leaks
PC Perspective
This is great news for AMD , who have been enjoying the royalties from the sales of consoles and could use the fresh injection of cash as gamers …

AMD Matisse CPUs and 5800MHz DDR4 memory modules referenced in AIDA64. Wait… what?
PCGamesN
The upcoming AMD Matisse CPUs are being name-checked in the latest beta update for benchmarking software, AIDA64. The upcoming AMD  …

AMD Radeon VII Will Have Excellent Linux Support From Day 1
Fossbytes
Talking about the performance of this consumer graphics card, AMD has said that Radeon VII is expected to deliver about 25% more performance as …

VIDEO: Visual benefits better with AMD over-treatment
Healio
WAIKOLOA, Hawaii — At Retina 2019 here, Carl D. Regillo, MD, discusses the latest data on anti-VEGF dosing regimens, which show that, over time, …

AMD Gonzalo APU in PS5, Xbox Next May Feature Navi Graphics, Zen Cores
ExtremeTech
AMD hasn’t formally announced that it has won the Xbox Next and PlayStation 5 contracts just yet, but it’s considered a more-or-less foregone …

Faulty molecular master switch may contribute to AMD
EurekAlert (press release)
This damage is similar to cellular effects observed in AMD , a common cause of vision loss among older Americans. The study was published today in …

HP introduces 2 more AMD -based education and enterprise Chromebooks
VentureBeat
The HP Chromebook 11A G6 Education Edition uses the AMD A4-9120C APU (accelerated processing unit, which combines graphics and processing …

Core blimey… When is an AMD CPU core not a CPU core? It’s now up to a jury of 12 to decide
The Register
A class-action lawsuit against AMD claiming false advertising over its ‘eight core’ Opteron and FX processors has been given the go-ahead by a …

AMD Radeon VII graphics card will support Linux on launch day
TechRadar
AMD’s freshly unveiled Radeon VII graphics card, which was shown off recently at CES, will come with support for Linux straight out of the gate when …

With a monitor that supports G-SYNC, do you still need to enable V-Sync?
OFweek
NVIDIA's version of this technology is called G-Sync; AMD graphics cards later added support for a similar technology named FreeSync. Whether G-Sync or FreeSync, the goal is to eliminate screen tearing while …

AMD recently released its latest 7nm Radeon VII graphics card
IT搜购网
Reportedly, the Japanese site 4gamer.net spoke with AMD marketing manager Adam Kozak, who said AMD is currently testing DirectML on the Radeon VII to achieve a DLSS-like effect.

7nm mainstream AMD Navi card shows up in macOS source code: is this the RX 600?
泡泡网
Foreign media digging through the source code of a recent macOS update found the strings "AMD Radeon X6000" and Navi 9/10/12/16. It looks like Apple has already received Navi samples from AMD …

Why don't MSI laptops and desktops use AMD processors? MSI's CEO gives three reasons
cnBeta
AMD celebrates its 50th anniversary this year, and yesterday it released a poster highlighting the new products to look forward to; besides 7nm processors and graphics cards, another focus of AMD's promotion is …

Pages


Faulty molecular master switch may contribute to AMD
Scienmag
This damage is similar to cellular effects observed in AMD , a common cause of vision loss among older Americans. The study was published today in …

Tim Cook – 2019-01-23 – Easy News

News


Apple employees donated $125 million in charity in 2018
Times Now
… iPhone-maker wrote in a blog-post on Monday. Over the weekend, Apple CEO Tim Cook also became part of the volunteering efforts in San Jose.

The Pros And Cons of Apple Stock
Investorplace.com
2, Tim Cook released a letter that included the following statement: “Based on these estimates, our revenue will be lower than our original guidance for …

Apple’s 2020 iPhones will likely get upgraded displays after the XR disappoints
CNBC
However, CEO Tim Cook told CNBC earlier this month that the XR has been Apple’s best-selling iPhone model since it launched last fall.

Cook: Location, Funding Among Eleven Park Questions
Inside INdiana Business
Indianapolis-based KSM Location Advisors Chief Executive Officer Tim Cook and President Katie Culp say efforts like the planned $550 million Eleven …

Apple, CEO Tim Cook double down on privacy demands
Compliance Week (blog)
While Facebook and Google continue to come under fire for bungling users’ data privacy, Apple and its CEO, Tim Cook , remain on the offensive …

Tim Cook Calls For Data Broker Clearinghouse
MediaPost Communications
Apple CEO Tim Cook has pulled data brokers into the privacy discussion, calling on the Federal Trade Commission to set up a “data broker …

Apple’s Tim Cook meets with power brokers at Davos, says education efforts ‘for the people’
AppleInsider
Apple CEO Tim Cook is in Davos, Switzerland this week for the annual World Economic Forum, where corporate and political leaders gather to …

Tim Cook mingles with Microsoft’s Satya Nadella at World Economic Forum, touts Apple’s …
9to5Mac
Apple CEO Tim Cook is in Switzerland this week amid the annual World Economic Forum. According to local reports, Tim Cook has mingled with the …

These Are the Most Over and Underrated CEOs
Fortune
Three CEOs–Bezos, Tim Cook of Apple, and Jamie Dimon of JPMorgan Chase–managed to make it on to the most underrated and most overrated …

Apple: Don’t Blame Tim Cook
Seeking Alpha
Critics of Tim Cook say there has been no major follow on (or add on) product since he took over the CEO role. Well, they would be wrong, since Apple …

Microsoft – 2019-01-23 – Easy News

News


Microsoft Bolstering Security, Compliance With Microsoft 365 Add-Ons
eWeek
Microsoft 365 enterprise users will now be able to buy new optional security and compliance packages to add deeper security and compliance …

How to Use Microsoft Teams for Free | PCMag.com
PCMag.com
With the free flavor of Microsoft Teams, you get unlimited chats, audio and video calls, and 10GB of file storage for your entire team, plus 2GB of …

Microsoft , Intel Look to Cloud Sales as Earnings Season Begins
Bloomberg
Results from Intel, Microsoft and Texas Instruments Inc. will follow International Business Machines Corp. over the next two weeks, providing a window …

Microsoft AI is powering digital transformation in India to fuel economic and social growth
OnMSFT (blog)
At its annual Media & Analyst Days event in India, Microsoft India shared details about its progress on its vision to democratize AI in the country to fuel …

Microsoft’s CEO – This Is the Challenge of our Times
TheStreet.com
Microsoft (MSFT – Get Report) CEO Satya Nadella on Tuesday, Jan. 22, stressed that the decoupling between the economic and productivity growth …

Microsoft’s Code Jumper makes programming physical for children with visual impairments
TechCrunch
Microsoft just unloaded a whole bunch of news in time for the BETT education show. The most interesting bit of the bunch, however, is probably Code …

Why are we relying on tech overlords like Microsoft for affordable housing?
The Guardian
So, last Wednesday, when Microsoft announced a plan to dedicate $500m towards alleviating the affordable housing crisis in the area, one might …

Microsoft to start selling more Azure services directly starting in March
ZDNet
Microsoft is changing the way it will sell Azure services to small and midsize businesses (SMBs) starting in March. Microsoft will be advising customers …

Microsoft targets Chrome OS with $189 Windows 10 laptops for education
Digital Trends
Microsoft is gunning for Chromebooks in the education space by announcing seven new Windows 10 laptops that start at just $189. Targeted at …

Microsoft built a stylus for students as it goes aggressively after education market
CNBC
Microsoft is taking the market so seriously that it’s just created a stylus specifically for students. The Microsoft Classroom Pen, introduced Tuesday, …

Pages


Microsoft Office Home and Business 2019
SHI
Microsoft Office Home and Business 2019 – License – 1 PC/Mac – download – ESD – Win, Mac – All Languages.

Microsoft wants to hear your ideas on how to improve gaming on Windows
HardwareZone
There’s a Gaming Mode in Windows 10 but there’s definitely room for improvement as Microsoft isn’t known for resting on its laurels. So if you have a …

What’s New in EDU – Bett Edition: Announcing new Windows 10 devices and tools to drive better …
Microsoft Education Blog
We’re introducing seven new affordable Windows 10 devices to our portfolio for schools, faster assignments and grading tools in Microsoft Teams, and …

Events
Microsoft
See what’s happening at your Microsoft Store, with events for kids, students, business professionals and more.

Security Updates Guide
Microsoft Security Update Guide


Microsoft Redeem
Microsoft

Microsoft – 2019-01-23 – Easy News

News


Microsoft CTO: in the 21st century you need to know something about AI
Sina
[CNMO News] Microsoft CTO Kevin Scott believes that an understanding of AI will help people become better citizens in the future: "I think that to be a well-informed citizen in the 21st century, you …

Lumia 950 XL runs Windows 10 on ARM, almost like a Surface Phone
Sina
Microsoft has already stopped supporting Windows Phone and plans to end technical support for Windows 10 Mobile in December 2019. But the developer community is still running several projects, which …

Frontline | Microsoft publishes a new HoloLens patent that could let users see how bad other people's driving is
Sohu
Recently, Microsoft's invitation to MWC 2019 (Mobile World Congress) hinted that HoloLens 2 may be unveiled there. Whether and how this newly published patent will actually be used in the future, however, …

Microsoft Office 2019 launches in China: more efficient office work
Sina
Beijing, January 22, 2019: Microsoft China announced today that Office 2019 for consumers and small businesses is officially available. Office 2019 for consumers and small businesses includes Office Home & Student …

Microsoft announces the Microsoft Wallet app will be retired on February 28
Sina
Like Apple Pay and Android Pay, Microsoft Wallet was Microsoft's payment app for the Windows 10 ecosystem, but the failure of the Windows Phone strategy left Microsoft Wallet with little use …

A sea change for gaming on Windows 10? Microsoft: we're looking into it
Sina
Last year Microsoft promised to improve the PC gaming experience by improving the Microsoft Store; recently it reiterated the need for this effort and hopes to get active feedback from users …

Microsoft Office 2019 officially on sale in China
Sina
On January 22, 2019, Microsoft China announced that Office 2019 for consumers and small businesses is officially on sale. Office 2019 for consumers and small businesses includes Office Home & Student 2019 …

How did Nadella pull off Microsoft's quiet revival?
Fortune China
Microsoft CEO Satya Nadella speaks at Fortune's Brainstorm Tech conference. Image credit: Kevin Moloney, Fortune Brainstorm TECH …

Surface Pro angrily slammed by an NFL coach: Microsoft's awkward explanation
Sina
The incident was quickly spread across social media by netizens; to defuse the embarrassment, Microsoft Chief Product Officer Panos Panay promptly tweeted that there was no need to worry about the device, since the Surface Pro can take a beating.

Microsoft says a final goodbye to mobile operating systems
36kr
The good times did not last, though: just a few months later its position was taken by a rapidly transforming and quietly prospering Microsoft (Amazon's market value has now surpassed Microsoft's and Apple's). What is worth pondering, though, is that compared with Apple's current …

Pages


Microsoft will completely end Windows 7 updates in 2020
InfoQ
Microsoft has announced that it will stop updates and security support for Windows 7 on January 14, 2020; Windows users will have to choose between Windows 7 and Windows 10.