12bet++博客-杰 | 杰哥好,哈哈! | http://www.fometaux.com/guijie/ (last build: Mon, 24 Jun 2019 22:12:57 GMT)

Post: http://www.fometaux.com/guijie/archive/2019/06/20/216430.html (Wed, 19 Jun 2019 17:44:00 GMT)
https://www.mathworks.com/matlabcentral/fileexchange/20689-jensen-shannon-divergence
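The link above is a MATLAB File Exchange implementation. As a rough idea of what such a function computes, here is a minimal NumPy sketch (my own illustration, not the linked code):

import numpy as np

# Jensen-Shannon divergence between two discrete distributions p and q,
# using base-2 logarithms so the value lies in [0, 1].
def js_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        return np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(js_divergence([0.5, 0.5], [0.9, 0.1]))  # approximately 0.147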

Posted by 杰哥, 2019-06-20 01:44
Post: http://www.fometaux.com/guijie/archive/2019/06/19/216424.html (Tue, 18 Jun 2019 17:01:00 GMT)

Image frequency: a measure of how sharply the gray-level values change; it is the gradient of the gray level over the spatial plane of the image.

(1) What is low frequency?
      Low frequency means the color, i.e. the gray level, changes slowly, which corresponds to a region of continuous, gradual variation. For an image, whatever is not high frequency is low frequency: the content inside the edges is low frequency, and that content carries most of the image's information, namely its rough appearance and outline, an approximation of the image.

(2) What is high frequency?

     Conversely, high frequency means the gray level changes quickly. When does the gray level change quickly in an image? When adjacent regions differ greatly in gray level. In an image, the boundary between an object and the background usually shows an obvious difference; along that edge line the gray level changes rapidly, so it is a high-frequency location. Rapid change of gray level at image edges therefore corresponds to high frequency: high frequency reveals the edges. Image details also belong to the regions where the gray level changes sharply; it is precisely this sharp change that makes the detail visible.
      The same goes for noise (noisy pixels): a pixel is a noise point precisely because its color differs from the normal points around it, i.e. its gray level is obviously different, meaning the gray level changes rapidly there, so noise sits in the high-frequency part. That is why people say noise is high frequency.

      Fundamentally, this is how the human eye recognizes objects. If you wear a red shirt and have your photo taken in front of a red backdrop, can you be picked out easily? No: the clothes blend into the background, nothing changes, so nothing stands out, unless a light shines on the person from some angle and produces highlights and shadows at the edges. Then we can see some contour lines, and those lines are exactly the places where the color (i.e. the gray level) differs sharply.
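To make the low/high frequency split concrete, here is a minimal Python sketch (my own illustration, assuming SciPy is available): blurring keeps the low-frequency outline, and the residual holds edges, fine detail, and noise.

import numpy as np
from scipy import ndimage

def split_frequencies(img, sigma=3.0):
    # Split a grayscale image into low- and high-frequency parts via Gaussian blur.
    img = img.astype(float)
    low = ndimage.gaussian_filter(img, sigma)   # gradual shading, overall outline
    high = img - low                            # edges, fine detail, noise
    return low, high

# random data standing in for a real image
low, high = split_frequencies(np.random.rand(64, 64))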
--------------------- 
Author: charlene_bo 
Source: CSDN 
Original: https://blog.csdn.net/charlene_bo/article/details/70877999 
Copyright notice: this is the blogger's original article; please include a link to the original when reposting.


Posted by 杰哥, 2019-06-19 01:01
Post: http://www.fometaux.com/guijie/archive/2019/06/19/216423.html (Tue, 18 Jun 2019 16:01:00 GMT)

https://www.zhihu.com/question/265609875

Posted by 杰哥, 2019-06-19 00:01
Post: http://www.fometaux.com/guijie/archive/2019/05/29/216382.html (Wed, 29 May 2019 01:33:00 GMT)

https://zhidao.baidu.com/question/98221541.html

 

The equation editor I use is MathType. Inside the editor, open the Edit menu, which has an "Insert Symbol" command; in that dialog, change the font being viewed to "Euclid Math One", which contains the script capitals A, B, C. Select the one you want and click the Insert button. I am not sure whether it works the same way in "Microsoft Equation 3.0". For reference only.
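If the document is written in LaTeX rather than Word, the same script capitals are available directly. A minimal sketch (the mathrsfs package is one common choice for the curlier script shapes):

\documentclass{article}
\usepackage{mathrsfs}  % provides \mathscr
\begin{document}
Calligraphic: $\mathcal{A}, \mathcal{B}, \mathcal{C}$;
script: $\mathscr{A}, \mathscr{B}, \mathscr{C}$.
\end{document}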



Posted by 杰哥, 2019-05-29 09:33
Post: http://www.fometaux.com/guijie/archive/2019/05/17/216377.html (Fri, 17 May 2019 14:59:00 GMT)

https://m.sohu.com/a/245182179_473283/?pvid=000115_3w_a

Source: scholar.google.com

Author: 闻菲

[New Intelligence (新智元) overview] Google Scholar yesterday released its 2018 ranking of journal and conference impact: CVPR and NIPS rank 20th and 54th respectively. In Nature, which ranks first, the most cited paper of the past five years is "Deep Learning", written by the three giants of deep learning, Hinton, LeCun, and Bengio, while the most cited paper in CVPR is ResNet, with more than 10,000 citations.

Yesterday, Google Scholar released its latest 2018 journal/conference impact ranking. In the overall (all-fields) list, unsurprisingly, Nature is first and Science third; but notably, the top computer-vision conference CVPR ranks 20th, and NIPS, another top AI conference, ranks 54th. Both are up substantially from last year.

Even in first-ranked Nature, the most cited paper of the past five years is "Deep Learning", co-authored by the three deep-learning giants Hinton, LeCun, and Bengio.

Moreover, the most cited CVPR paper of the past five years is the ResNet paper by Sun Jian, He Kaiming, Zhang Xiangyu, and Ren Shaoqing, who were then at Microsoft Research Asia; it has already been cited more than 10,000 times.

2018 Google Scholar journal and conference impact ranking: CVPR 20th, NIPS 54th

First, the overall (all-fields) results.

Nature and Science, the ones we care about most, rank first and third respectively; the famous medical journals The New England Journal of Medicine and The Lancet are second and fourth. Cell, which in China is usually grouped with Nature and Science as "CNS", ranks 6th this time.

Next come the AI-related journals and conferences that New Intelligence readers care more about. This time the computer-vision conference CVPR lives up to expectations at 20th, finally bringing a computer-science venue into the Top 20.

 

Meanwhile NIPS, another closely watched AI conference, ranks 54th overall, a respectable result.

Nature Neuroscience, related to neuroscience, ranks 44th.

 

As for the journals ranked 21st to 40th, AI-related papers do appear in them regularly, so that part of the ranking is also worth a look.

Notably, PLoS ONE sits at 23rd and Scientific Reports at 39th; both are still decent venues to publish in.

 

Positions 61 to 80 contain a cluster of IEEE journals. ICCV, regarded as the other top computer-vision conference, ranks 78th.

 

The journals and conferences ranked 81 to 100 are listed below. TPAMI sits at 92nd; as expected, good papers go to conferences first.

 

In the Engineering & Computer Science Top 20, CVPR ranks 5th.

 

Google Scholar Metrics methodology: the "h5-index" of papers published in the past five years

Google Scholar ranks journals and conferences mainly by h-index. In fact, since 2012, Google Scholar Metrics (GSM) has released its ranking of academic journals and conferences every year.

Compared with the Journal Citation Report (JCR) that Clarivate Analytics publishes based on the Web of Science database, GSM is free to search, and the range of journals and conferences it indexes is far larger than Web of Science.

Another point: a venue's h5-index (the h-index over the past five years) is relatively hard to manipulate. It does not rise noticeably because of one extra very highly cited paper, and deliberately reducing the number of published papers does not help raise it either.

The h5-index therefore reflects the overall strength of a journal or conference, and it has gradually become an important reference for evaluating the impact of publications and conferences.

Overall, GSM mainly looks at the following three metrics:

Namely the h5-index, h5-core, and h5-median, which are computed from the papers a venue published in the most recent five years and the citation counts of those papers, as indexed in Google Scholar.

For example, if among the papers a journal published in the past five years at least h papers have each been cited at least h times, then that journal's h5-index is h. The h5-core and h5-median are computed in the same spirit: the h5-core is the set of those top-cited h papers, and the h5-median is the median citation count within that set.
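As a concrete illustration of the definition above, a minimal Python sketch (my own) that computes an h-index from a list of citation counts:

def h_index(citations):
    # Largest h such that at least h papers have at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each cited at least 4 times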

Learn more:

https://scholar.google.com/citations?view_op=top_venues&hl=zh-CN&vq=en




Posted by 杰哥, 2019-05-17 22:59
Post: http://www.fometaux.com/guijie/archive/2019/04/07/216341.html (Sun, 07 Apr 2019 00:49:00 GMT)
Following a flagged email of 2019-04-05, I uploaded two papers to arXiv. I first compressed each into a zip file and then used "Upload file", which produced the following error:
contained a.bib file, but no a.bbl file (include a.bbl, or submit without a.bib; and remember to verify references).
Simply delete a.bib from the files you upload; do not delete it from the original folder, or you may lose the file, since you will sometimes still need to compile locally.
https://arxiv.org/help/submit#availability


[zz] Method and pitfalls of uploading a LaTeX article to arXiv
If you want to post an article on arXiv, you can generally post it in either PDF or LaTeX form; if the PDF was generated from LaTeX, you can usually only upload the LaTeX source, and uploading that PDF is not supported.
Uploading LaTeX to arXiv mainly involves the following steps; the "upload files and compile online" step in particular has a few pitfalls to watch out for.
Step 1: register an account and fill in your institutional email address, to avoid a possible review of your upload permission;
Steps 2-6: fill in some basic information and settings; see the screenshots available online:
Create a new submission:
Fill in the information
Step 7: upload the files. This is the important part: it determines whether the online compilation succeeds.
The folder produced by compiling LaTeX on our local machine usually looks like this:
It contains a pile of things, but only three matter: the source .tex file, the .bbl file with the same base name as the .tex file, and the various figures used in the article (jpg, pdf, and other image formats).
One more thing to note: image files cannot be uploaded as a folder; they have to be uploaded one at a time. For example, in the screenshot above you would upload the images in the figures folder one by one, and if there are nested folders, open them and keep uploading. After everything is uploaded it looks like this:
Note that when the online compiler builds the .tex file, every figure path referenced in the .tex must be a top-level path, because that is where the images now sit. Locally, however, the figure paths inside the .tex usually go through one or more directories for convenience; for example, in the screenshot above the image paths in the .tex include at least "figures/xxx.jpg". If you upload the local .tex without changing the paths, the online compiler cannot find the figures folder and the build fails. So you need to change every figure reference in the local .tex to a top-level reference, i.e. just the file name, e.g. xxx.jpg.
Compare: the offline .tex may look like this:
Online there is no figures folder, so the corresponding directory prefix has to be removed, like this:
Every figure reference in the original .tex must be changed to a top-level reference before the online build will pass, as in the sketch below.
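Rather than editing every \includegraphics by hand, a small script can flatten the paths. A hedged sketch (my own helper, not from the original post; the file names main.tex and main_arxiv.tex are made up for the example):

import re

def flatten_figure_paths(tex_in="main.tex", tex_out="main_arxiv.tex"):
    # Rewrite \includegraphics{figures/foo.pdf} as \includegraphics{foo.pdf};
    # entries without a directory separator are left untouched.
    with open(tex_in, encoding="utf-8") as f:
        src = f.read()
    flattened = re.sub(
        r"(\\includegraphics(?:\[[^\]]*\])?\{)([^}]*/)([^}]+\})",
        r"\1\3",
        src,
    )
    with open(tex_out, "w", encoding="utf-8") as f:
        f.write(flattened)

if __name__ == "__main__":
    flatten_figure_paths()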
After the build succeeds, fill in the basic metadata (title, author, abstract, comments, and so on) and submit. Before submitting you can preview the generated PDF; if the online build matches your expectations, go ahead and announce it.
--------------------- 
Author: 我i智能 
Source: CSDN 
Original: https://blog.csdn.net/on2way/article/details/85940768 
Copyright notice: this is the blogger's original article; please include a link to the original when reposting.


Posted by 杰哥, 2019-04-07 08:49
Post: http://www.fometaux.com/guijie/archive/2019/04/02/216325.html (Mon, 01 Apr 2019 21:42:00 GMT)

While research in Generative Adversarial Networks (GANs) continues to improve the fundamental stability of these models, we use a bunch of tricks to train them and make them stable day to day.

Here is a summary of some of the tricks.

Here's a link to the authors of this document

If you find a trick that is particularly useful in practice, please open a Pull Request to add it to the document. If we find it to be reasonable and verified, we will merge it in.

1. Normalize the inputs

  • normalize the images between -1 and 1
  • Tanh as the last layer of the generator output
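A minimal PyTorch-flavored sketch of this point (my own illustration; `images` is assumed to be a uint8 batch in [0, 255], and the layer sizes are placeholders):

import torch
import torch.nn as nn

# scale input images from [0, 255] to [-1, 1]
def normalize(images):              # images: uint8 tensor, shape (B, C, H, W)
    return images.float() / 127.5 - 1.0

# end the generator with Tanh so its output lives in the same [-1, 1] range
generator_head = nn.Sequential(
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
    nn.Tanh(),
)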

2: A modified loss function

In GAN papers, the loss function used to optimize G is min log(1 - D), but in practice folks use max log D

  • because the first formulation has vanishing gradients early on
  • Goodfellow et al. (2014)

In practice, works well:

  • Flip labels when training generator: real = fake, fake = real
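In code, the difference between the saturating and non-saturating generator loss looks roughly like this (a sketch using PyTorch's BCEWithLogitsLoss; `d_out_fake` stands in for the discriminator's logits on generated samples):

import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
d_out_fake = torch.randn(8, 1)               # placeholder for D(G(z)) logits
real_label = torch.ones_like(d_out_fake)
fake_label = torch.zeros_like(d_out_fake)

# saturating loss from the original paper: minimize log(1 - D(G(z)))
loss_saturating = -bce(d_out_fake, fake_label)   # gradient vanishes early in training

# non-saturating trick used in practice: maximize log D(G(z)),
# i.e. train G against "real" labels on fake samples
loss_non_saturating = bce(d_out_fake, real_label)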

3: Use a spherical Z

  • Don't sample from a uniform distribution (a cube)
  • Sample from a Gaussian distribution (a sphere)
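A tiny sketch of the sampling difference (my own illustration):

import torch

batch, z_dim = 16, 100
z_uniform = torch.rand(batch, z_dim) * 2 - 1   # cube-shaped support: avoid this
z_gaussian = torch.randn(batch, z_dim)          # Gaussian (spherical) noise: preferred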

4: BatchNorm

  • Construct different mini-batches for real and fake, i.e. each mini-batch needs to contain only all real images or all generated images.
  • when batchnorm is not an option use instance normalization (for each sample, subtract mean and divide by standard deviation).


5: Avoid Sparse Gradients: ReLU, MaxPool

  • the stability of the GAN game suffers if you have sparse gradients
  • LeakyReLU = good (in both G and D)
  • For Downsampling, use: Average Pooling, Conv2d + stride
  • For Upsampling, use: PixelShuffle, ConvTranspose2d + stride

6: Use Soft and Noisy Labels

  • Label smoothing, i.e. if you have two target labels, Real=1 and Fake=0, then for each incoming sample, if it is real, replace the label with a random number between 0.7 and 1.2, and if it is fake, replace it with a random number between 0.0 and 0.3 (for example).
    • Salimans et al. 2016
  • make the labels noisy for the discriminator: occasionally flip the labels when training the discriminator (see the sketch below)
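A minimal sketch of soft plus occasionally flipped labels for the discriminator (my own illustration; the 0.7-1.2 and 0.0-0.3 ranges follow the example above, and the 5% flip probability is an assumed value):

import torch

def soft_labels(batch_size, real=True, flip_prob=0.05):
    if real:
        labels = torch.empty(batch_size).uniform_(0.7, 1.2)   # smoothed "real"
    else:
        labels = torch.empty(batch_size).uniform_(0.0, 0.3)   # smoothed "fake"
    flip = torch.rand(batch_size) < flip_prob                  # occasionally flip
    labels[flip] = 1.0 - labels[flip]
    return labels

real_targets = soft_labels(32, real=True)
fake_targets = soft_labels(32, real=False)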

7: DCGAN / Hybrid Models

  • Use DCGAN when you can. It works!
  • if you can't use DCGANs and no model is stable, use a hybrid model: KL + GAN or VAE + GAN

8: Use stability tricks from RL

  • Experience Replay
    • Keep a replay buffer of past generations and occasionally show them
    • Keep checkpoints from the past of G and D and occasionally swap them out for a few iterations
  • All stability tricks that work for deep deterministic policy gradients
  • See Pfau & Vinyals (2016)

9: Use the ADAM Optimizer

  • optim.Adam rules!
    • See Radford et al. 2015
  • Use SGD for discriminator and ADAM for generator
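A sketch of the optimizer split (my own illustration; `G` and `D` are placeholder modules, and the learning rate and betas are the commonly cited DCGAN-style values, used here as assumptions):

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 784), nn.Tanh())   # placeholder generator
D = nn.Sequential(nn.Linear(784, 1))                 # placeholder discriminator

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.SGD(D.parameters(), lr=2e-4)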

10: Track failures early

  • D loss goes to 0: failure mode
  • check norms of gradients: if they are over 100 things are screwing up
  • when things are working, D loss has low variance and goes down over time vs having huge variance and spiking
  • if loss of generator steadily decreases, then it's fooling D with garbage (says martin)

11: Don't balance loss via statistics (unless you have a good reason to)

  • Don't try to find a (number of G / number of D) schedule to uncollapse training
  • It's hard and we've all tried it.
  • If you do try it, have a principled approach to it, rather than intuition

For example

while lossD > A:
    train D
while lossG > B:
    train G

12: If you have labels, use them

  • if you have labels available, train the discriminator to also classify the samples: auxiliary GANs

13: Add noise to inputs, decay over time
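A hedged sketch of what "add noise to inputs, decay over time" can look like (my own illustration; the initial sigma and the linear decay schedule are assumptions):

import torch

def noisy_inputs(x, step, total_steps, sigma0=0.1):
    # Add Gaussian noise whose scale decays linearly to zero over training.
    sigma = sigma0 * max(0.0, 1.0 - step / total_steps)
    return x + sigma * torch.randn_like(x)

x = torch.randn(8, 3, 32, 32)
x_noisy = noisy_inputs(x, step=1000, total_steps=100000)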

14: [notsure] Train discriminator more (sometimes)

  • especially when you have noise
  • hard to find a schedule of number of D iterations vs G iterations

15: [notsure] Batch Discrimination

  • Mixed results

16: Discrete variables in Conditional GANs

  • Use an Embedding layer
  • Add as additional channels to images
  • Keep embedding dimensionality low and upsample to match image channel size
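A minimal sketch of the embedding-as-extra-channels idea (my own illustration; the sizes are arbitrary):

import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=8)   # keep the embedding small
labels = torch.tensor([3, 7])                              # a batch of class ids
e = emb(labels)                                            # (B, 8)
e_map = e.view(-1, 8, 1, 1).expand(-1, 8, 32, 32)          # broadcast to image size
images = torch.randn(2, 3, 32, 32)
conditioned = torch.cat([images, e_map], dim=1)            # (B, 3 + 8, 32, 32)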

17: Use Dropouts in G in both train and test phases

Authors

  • Soumith Chintala
  • Emily Denton
  • Martin Arjovsky
  • Michael Mathieu
Reference:
https://github.com/soumith/ganhacks#authors



Some small tricks for GANs

I have run into many pitfalls training GANs recently. Training a GAN really is a painful problem: if you only run other people's papers on some application it is fine, but if you design new architectures and do new research, you need to understand these tricks, all learned through tears~

This doc, soumith/ganhacks, is practically the Nine Yin Manual of the GAN world; after reading it I feel I have gone up a level.

My own notes:

1. Normalize the input to [-1, 1]. Use tanh for the generator output, which is also in [-1, 1], so the two match.

2. The papers optimize G with min log(1 - D), but in practice you can train with max log(D).

3. For the noise z, don't use a uniform distribution; use a Gaussian distribution.

4. Instance norm can be used instead of batch norm. Also, put real samples together in one mini-batch and generated samples together in another (this one feels obvious, QAQ).

5. Avoid sparse gradients: ReLU, MaxPool and the like. My take on the reason: unlike a discriminative network, which only has to extract the important information and can ignore details that barely affect the prediction, a GAN is a generative model and has to express as much detail as possible, so these sparse operations should be avoided?

  • LeakyRelu
  • For Downsampling, use: Average Pooling, Conv2d + stride
  • For Upsampling, use: PixelShuffle, ConvTranspose2d + stride

6. You can map label 1 (real) to a random value in 0.7-1.2 and label 0 (fake) to 0-0.3. Worth thinking about more deeply.

7. Use DCGAN whenever you can; if you cannot, use a hybrid model such as KL + GAN or VAE + GAN.

8. Borrow training tricks from RL:

  • Keep a replay buffer of past generations and occasionally show them
  • Keep checkpoints from the past of G and D and occasionally swap them out for a few iterations

9. Use ADAM! Or use SGD for D and ADAM for G.

10. Watch the training process and catch failures early, so you do not train for a long time only to discover the failure at the end and waste the time.

11. It is best not to try to balance the training of G and D with hand-set constants. (They say this is hard to make work. I think it is still worth trying if you have the time.)

12. If you have labels for the real data, use them (AC-GAN). Adding label information lowers the difficulty of generation, which makes intuitive sense.

13. Add noise? The effect is to improve the diversity of the generated content?

  • Add some artificial noise to inputs to D (Arjovsky et al., Huszar, 2016)
  • adding gaussian noise to every layer of generator (Zhao et al. EBGAN)

14. [not sure] Train D more, especially when adding noise.

15. [not sure] Batch discrimination: mixed results. It feels somewhat like the patchGAN in pix2pix?

16. CGAN. I have always felt that CGAN is closer to how humans learn. The original GAN is too brute-force: as if knowing nothing, D and G debate and fight against each other, producing work nobody has done before, pioneering work, so it is harder. CGAN has certain conditions, i.e. accumulated technology, so it is somewhat easier. A bit like research, where the big names dig the pits and open new directions (GAN), and the smaller ones fill them in (CGAN).

17. Use dropout (50%) in several layers of G, at both train and test time. There is a paper on this that I have not read yet.


After reading these, I feel I would have a systematic picture if I wanted to design a GAN, without the uneasy sense that some important point is still unknown to me. That feeling really bothers my perfectionist side!! After reading, I instantly felt much more at ease~~~

https://zhuanlan.zhihu.com/p/27725664

Posted by 杰哥, 2019-04-02 05:42
Post: http://www.fometaux.com/guijie/archive/2019/03/27/216316.html (Tue, 26 Mar 2019 21:01:00 GMT)

What is the meaning of the word logits in TensorFlow?
In the following TensorFlow function, we must feed the activation of artificial neurons in the final layer. That I understand. But I don't understand why it is called logits? Isn't that a mathematical function?
loss_function = tf.nn.softmax_cross_entropy_with_logits(
     logits = last_layer,
     labels = target_output
)

For example, in the last layer of the discriminator of generative adversarial networks (GAN), we will use sigmoid(logits) to get the output of D. This is discussed with Zhengxia.
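A small NumPy sketch of the relationship (my own illustration): the "logits" are the raw, pre-activation scores of the last layer; the *_with_logits losses apply the sigmoid (or softmax) internally, in a numerically stable form.

import numpy as np

logits = np.array([2.0, -1.0, 0.3])           # raw last-layer outputs (pre-activation)
probs = 1.0 / (1.0 + np.exp(-logits))          # sigmoid(logits) -> probabilities in (0, 1)

labels = np.array([1.0, 0.0, 1.0])
# binary cross-entropy computed directly from the logits, the stable form that
# sigmoid_cross_entropy_with_logits uses: max(x, 0) - x*z + log(1 + exp(-|x|))
bce = np.mean(np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits))))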
Reference:
https://stackoverflow.com/questions/41455101/what-is-the-meaning-of-the-word-logits-in-tensorflow


Posted by 杰哥, 2019-03-27 05:01
Post: http://www.fometaux.com/guijie/archive/2019/03/20/216303.html (Tue, 19 Mar 2019 19:08:00 GMT)

For https://github.com/scutan90/DeepLearning-500-questions/blob/master/ch07_%E7%94%9F%E6%88%90%E5%AF%B9%E6%8A%97%E7%BD%91%E7%BB%9C(GAN)/%E7%AC%AC%E4%B8%83%E7%AB%A0_%E7%94%9F%E6%88%90%E5%AF%B9%E6%8A%97%E7%BD%91%E7%BB%9C(GAN).md, the formula does not display well. I asked Zhengxia; he says markdown does not support formulas very well, and as far as he knows it does not support in-line formulas at all. He uses markdown but does not write formulas in it; if you have to write formulas, you can use LaTeX. In Typora, click "启用源代码模式" (enable source-code mode) at the lower-left corner, then change
$$\mathop {\min }\limits_G \mathop {\max }\limits_D V(D,G) = {\rm E}_{x\sim p_{data}(x)}[\log D(x)] + {\rm E}_{z\sim p_z(z)}[\log (1 - D(G(z)))]$$
so that each of the two $$ delimiters sits on its own line. Then select "退出源代码模式" (exit source-code mode) and press F5, i.e. refresh; the formula will then display correctly.
Google "online markdown". For example, with https://dillinger.io/, copy the code from Typora into https://dillinger.io/; you will find that to see the formula you have to write it this way: $$\mathop {\min }\limits_G \mathop {\max }\limits_D V(D,G) = {\rm E}_{x\sim p_{data}(x)}[\log D(x)] + {\rm E}_{z\sim p_z(z)}[\log (1 - D(G(z)))]$$


Posted by 杰哥, 2019-03-20 03:08
12bet++博客-杰http://www.fometaux.com/guijie/archive/2019/03/17/216300.html杰哥杰哥Sun, 17 Mar 2019 15:24:00 GMThttp://www.fometaux.com/guijie/archive/2019/03/17/216300.htmlhttp://www.fometaux.com/guijie/comments/216300.htmlhttp://www.fometaux.com/guijie/archive/2019/03/17/216300.html#Feedback0http://www.fometaux.com/guijie/comments/commentRss/216300.htmlhttp://www.fometaux.com/guijie/services/trackbacks/216300.htmlgoogle scholar, 点进paper,cited by的number => 勾上search within citing articles, search box里输入 -author:"your name" 
Reference: http://www.unknownspace.org/article_t/Immigration/33820191.html

Posted by 杰哥, 2019-03-17 23:24