Clerk @ Niigata Kouseiren → Healthcare Information Technologist @ Uonuma Kikan Hospital → Machine Learning Engineer @ IQVIA Solutions Japan → Data Scientist @ Chugai Pharmaceutical → Solutions Architect @ SAS Institute Japan

RNN, or reading a time series from front to back

The structure of an RNN shows the previous node connecting to the current node. This connection is drawn in two dimensions, with the previous node on the left and the current node on the right.

f:id:HealthcareIT_interpreter:20190216122258p:plain

However, when you look at the input tensor, its structure is three-dimensional: the previous input sits in the front and the current input sits in the back.

In [1]: tensor                                                                  
array([[[3, 7],
        [7, 0],
        [2, 0],
        [3, 9],
        [1, 2]],

       [[4, 4],
        [6, 0],
        [2, 4],
        [3, 4],
        [3, 0]]])

In [2]: tensor.shape                                                            
Out[2]: (2, 5, 2)
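To make the front-to-back picture concrete, here is a minimal NumPy sketch (the weight shapes and hidden size are my own illustrative choices, not from any particular framework) of how an RNN walks along the time axis of such a `(batch, timesteps, features)` tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
tensor = rng.integers(0, 10, size=(2, 5, 2))  # (batch, timesteps, features)

hidden = 3
W_x = rng.normal(size=(2, hidden))       # input-to-hidden weights
W_h = rng.normal(size=(hidden, hidden))  # hidden-to-hidden weights
h = np.zeros((2, hidden))                # one hidden state per batch element

# Walk the time axis "from front to back": each step consumes one time slice
for t in range(tensor.shape[1]):
    x_t = tensor[:, t, :]                # the t-th input for every sequence
    h = np.tanh(x_t @ W_x + h @ W_h)

print(h.shape)  # (2, 3): one final hidden state per sequence
```

The loop never moves left or right; it only steps forward through the stack of time slices, which is exactly the front-to-back motion described above.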

So if you redraw the RNN structure to match the three dimensions of the tensor, you can place the previous node in the front and the current one in the back. This gave me a new insight: when you read a sentence, you can imagine the words streaming along the front-back axis, instead of along the left-right axis in which they appear on the page. This insight may even improve one's reading ability. For the human brain, moving from left to right (or right to left, or top to bottom) is less natural than moving from back to front, the way we walk. With this insight in mind, I tried reading a novel and checked whether my reading fluency improved.


By imagining the words streaming from back to front, the compulsion to move my eye focus from left to right was alleviated, which let me concentrate purely on grasping the meaning of each sentence. I call this reading method "RNN reading". I would like to know how a person with dyslexia feels when trying to read this way.

How can you trim an ipynb file that is so big your kernel cannot open it?

I was working on a multivariate regression analysis. There were over 80 explanatory variables, so I used stepwise selection with the AIC (Akaike information criterion) to reduce them. By the way, an AIC-based step function does not exist in Python, so you have to write it yourself.

qiita.com The author of this Qiita post wrote his own step function, and I copied and pasted it into my ipynb.

import numpy as np


def step_aic(model, exog, endog, **kwargs):
    """Select the best exogenous variables by forward stepwise AIC.

    Both exog and endog can be either str or list.
    (An endog list is for the Binomial family.)

    Note: this adopts only "forward" selection.

    Args:
        model: model class from statsmodels.formula.api
        exog (str or list): exogenous variables
        endog (str or list): endogenous variables
        kwargs: extra keyword arguments for the model (e.g., data, family)

    Returns:
        The fitted model with the smallest AIC found.
    """
    # convert exog, endog to list format
    exog = np.r_[[exog]].flatten()
    endog = np.r_[[endog]].flatten()
    remaining = set(exog)
    selected = []  # contains adopted candidates

    # calculate the AIC of the intercept-only model
    formula_head = ' + '.join(endog) + ' ~ '
    formula = formula_head + '1'
    aic = model(formula=formula, **kwargs).fit().aic
    print('AIC: {}, formula: {}'.format(round(aic, 3), formula))

    current_score, best_new_score = np.ones(2) * aic

    # adopt all elements, or end the loop once adding any element
    # no longer improves the AIC
    while remaining and current_score == best_new_score:
        scores_with_candidates = []
        for candidate in remaining:

            # calculate the AIC when adding the remaining elements one by one
            formula_tail = ' + '.join(selected + [candidate])
            formula = formula_head + formula_tail
            aic = model(formula=formula, **kwargs).fit().aic
            print('AIC: {}, formula: {}'.format(round(aic, 3), formula))

            scores_with_candidates.append((aic, candidate))

        # the candidate that improves the AIC most (smallest AIC) wins
        scores_with_candidates.sort()
        best_new_score, best_candidate = scores_with_candidates[0]

        # if adding the best candidate reduces the AIC, adopt it
        if best_new_score < current_score:
            remaining.remove(best_candidate)
            selected.append(best_candidate)
            current_score = best_new_score

    formula = formula_head + ' + '.join(selected)
    print('The best formula: {}'.format(formula))
    return model(formula=formula, **kwargs).fit()

Here is the problem: `print('AIC: {}, formula: {}'.format(round(aic, 3), formula))` yields a huge amount of text output in the notebook, which made my file as big as 80 MB. Have you ever heard of an 80 MB ipynb? Jupyter Notebook cannot handle it and froze. To solve this problem you have to trim your ipynb, but how? Your local Jupyter kernel cannot open it. I once tried to delete the unnecessary parts manually by opening the ipynb in my editor (as a JSON file), but that took enormous effort.
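One way to avoid the bloat in the first place (my own suggestion, not part of the original workflow) is to redirect the prints into an in-memory buffer so they never land in a notebook output cell. `noisy_fit` below is a stand-in for a chatty function like the step function above:

```python
import io
from contextlib import redirect_stdout

def noisy_fit():
    # stand-in for step_aic(): imagine thousands of AIC progress lines
    for i in range(3):
        print('AIC: {}, formula: y ~ x{}'.format(100 - i, i))
    return 'fitted model'

buffer = io.StringIO()
with redirect_stdout(buffer):   # every print inside the block goes to the buffer
    result = noisy_fit()

log = buffer.getvalue()         # inspect the log on demand, or just discard it
print(result)
print(log.splitlines()[-1])     # e.g. peek only at the final progress line
```

The return value is unaffected; only the noisy stdout is captured, so the notebook stays small no matter how verbose the selection loop is.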


My idea was to use a Google Colab notebook, since Colab can open a large file.

You can use Google Colab notebooks for trimming outputs. I opened the 83 MB ipynb file in Colab, and Colab could handle it. From the Colab GUI you can choose the output cells you want to delete, then download the file back to your local directory and reopen it. In this way I eventually trimmed the original file down to 2 MB.
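If you prefer to stay local, an .ipynb file is just JSON, so the outputs can also be stripped with a short standard-library script. This is a sketch with an inline stand-in notebook; for a real file you would `json.load` it from disk first:

```python
import json

def strip_outputs(notebook):
    """Clear every code cell's outputs in a parsed .ipynb (nbformat 4) dict."""
    for cell in notebook.get('cells', []):
        if cell.get('cell_type') == 'code':
            cell['outputs'] = []
            cell['execution_count'] = None
    return notebook

# A minimal stand-in for a bloated notebook (a real one comes from json.load)
nb = {
    'nbformat': 4,
    'cells': [
        {'cell_type': 'markdown', 'source': '# AIC selection'},
        {'cell_type': 'code', 'source': 'step_aic(...)',
         'execution_count': 1,
         'outputs': [{'output_type': 'stream',
                      'text': 'AIC: ...\n' * 10000}]},
    ],
}

nb = strip_outputs(nb)
print(len(nb['cells'][1]['outputs']))  # 0: the bulky stream output is gone
```

To process a file, load it with `json.load`, pass it through `strip_outputs`, and write it back with `json.dump`; recent versions of nbconvert also do this from the command line with `jupyter nbconvert --clear-output --inplace notebook.ipynb`.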



Current qualifications: Healthcare Information Technologist (passed the August 2016 exam), Health Information Manager (passed the February 2018 exam, 88th cohort)






import time

from selenium import webdriver


def main():
    # Result-details page for the Healthcare Information Technologist exam
    url = "https://www.jha-e.com/top/certExams/resultDetails"
    driver = webdriver.PhantomJS()  # note: PhantomJS support was later dropped from Selenium
    driver.get(url)
    time.sleep(3)  # give the page time to render
    print(driver.title)
    driver.quit()


if __name__ == '__main__':
    main()






Setting the Microsoft Translator Text API category to "generalnn" dramatically improves translation quality

I had embedded the Microsoft Translator Text API into my own web app, but the translation quality was too poor to be usable; in fact, the results differed from those of the iOS and Android apps. I had been asking a contact at Microsoft Japan about this directly, but the other day a person at one of Microsoft Japan's partner companies gave me the valuable tip that setting category="generalnn" fixes it, and doing so did indeed improve the results. If you specify nothing, the default is apparently category="general". By the way, does the "nn" at the end of "generalnn" stand for neural network? There is a good article on this here:

Microsoft Translator launching Neural Network based translations for all its speech languages – Translator
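To show where the `category` parameter goes, here is a sketch of assembling a v2-era Translate request (the HTTP interface this post is about; authentication via the subscription-key header is omitted, and you should check the parameter names against the official docs):

```python
from urllib.parse import urlencode

def build_translate_url(text, to_lang, category="generalnn"):
    """Assemble a v2 Translate call; 'generalnn' selects the neural models."""
    base = "https://api.microsofttranslator.com/v2/http.svc/Translate"
    query = urlencode({
        "text": text,
        "to": to_lang,
        "category": category,  # default is "general"; "generalnn" = neural
    })
    return base + "?" + query

url = build_translate_url("こんにちは", "en")
print("category=generalnn" in url)  # True
```

In the current v3 API neural translation is the default, so this tweak mattered mainly for v2-era integrations like the one described above.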




I implemented a wrapper so the Microsoft Translator Text API can be used easily from Python

I uploaded it to GitHub. github.com

Many similar libraries are already out there, but few of them supported the AddTranslation method properly, so I implemented it myself. (The AddTranslation method is a feature that lets you submit example sentences showing how you want something translated, which improves (customizes) the quality of subsequent translations.)
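For a rough idea of what such a call involves, here is a heavily hedged sketch of assembling an AddTranslation request against the v2 HTTP interface. The parameter names are my best recollection of that retired API, not taken from the author's library, so verify them before relying on this:

```python
from urllib.parse import urlencode

def build_add_translation_url(original, translated, from_lang, to_lang,
                              user="anonymous", rating=4):
    """Sketch of a v2 AddTranslation call (parameter names are assumptions)."""
    base = "https://api.microsofttranslator.com/v2/Http.svc/AddTranslation"
    query = urlencode({
        "originalText": original,      # the source sentence
        "translatedText": translated,  # how you want it translated next time
        "from": from_lang,
        "to": to_lang,
        "user": user,
        "rating": rating,              # higher ratings outrank machine output
    })
    return base + "?" + query

url = build_add_translation_url("検査値", "laboratory values", "ja", "en")
print("rating=4" in url)  # True
```

The idea is the one described above: you push a preferred example pair to the service, and later Translate calls in the same category take it into account.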