{ "nbformat_minor": 0, "nbformat": 4, "cells": [ { "execution_count": null, "cell_type": "code", "source": [ "%matplotlib inline" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "\n\n# Visualize Evoked data\n\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "import os.path as op\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "In this tutorial we focus on plotting functions of :class:`mne.Evoked`.\nFirst we read the evoked object from a file. Check out\n`tut_epoching_and_averaging` to get to this stage from raw data.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "data_path = mne.datasets.sample.data_path()\nfname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')\nevoked = mne.read_evokeds(fname, baseline=(None, 0), proj=True)\nprint(evoked)" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "Notice that ``evoked`` is a list of :class:`evoked ` instances.\nYou can read only one of the categories by passing the argument ``condition``\nto :func:`mne.read_evokeds`. To make things more simple for this tutorial, we\nread each instance to a variable.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "evoked_l_aud = evoked[0]\nevoked_r_aud = evoked[1]\nevoked_l_vis = evoked[2]\nevoked_r_vis = evoked[3]" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "Let's start with a simple one. We plot event related potentials / fields\n(ERP/ERF). The bad channels are not plotted by default. Here we explicitly\nset the ``exclude`` parameter to show the bad channels in red. All plotting\nfunctions of MNE-python return a handle to the figure instance. When we have\nthe handle, we can customise the plots to our liking.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "fig = evoked_l_aud.plot(exclude=())" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "All plotting functions of MNE-python return a handle to the figure instance.\nWhen we have the handle, we can customise the plots to our liking. For\nexample, we can get rid of the empty space with a simple function call.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "fig.tight_layout()" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "Now let's make it a bit fancier and only use MEG channels. Many of the\nMNE-functions include a ``picks`` parameter to include a selection of\nchannels. ``picks`` is simply a list of channel indices that you can easily\nconstruct with :func:`mne.pick_types`. See also :func:`mne.pick_channels` and\n:func:`mne.pick_channels_regexp`.\nUsing ``spatial_colors=True``, the individual channel lines are color coded\nto show the sensor positions - specifically, the x, y, and z locations of\nthe sensors are transformed into R, G and B values.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "picks = mne.pick_types(evoked_l_aud.info, meg=True, eeg=False, eog=False)\nevoked_l_aud.plot(spatial_colors=True, gfp=True, picks=picks)" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "Notice the legend on the left. 
{ "source": [ "Returning to the full MEG plot above: notice the legend on the left. The\ncolors suggest that there may be two separate sources for the signals,\nsomething that was not obvious from the first figure. Try painting an area\nover the traces with the left mouse button: a new window should open with\ntopomaps (scalp plots) averaged over the painted time span. There is\nalso a function for drawing topomaps separately.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "evoked_l_aud.plot_topomap()" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "By default the topomaps are drawn at evenly spaced time points across\nthe evoked data. We can also define the times ourselves.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "times = np.arange(0.05, 0.151, 0.05)\nevoked_r_aud.plot_topomap(times=times, ch_type='mag')" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "Or we can automatically select the peaks.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "evoked_r_aud.plot_topomap(times='peaks', ch_type='mag')" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "You can take a look at the documentation of :func:`mne.Evoked.plot_topomap`\nor simply write ``evoked_r_aud.plot_topomap?`` in your Python console to\nsee the different parameters you can pass to this function. Most of the\nplotting functions also accept an ``axes`` parameter. With that, you can\ncustomise your plots even further. First we create a set of matplotlib\naxes in a single figure and plot all of our evoked categories next to each\nother.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "fig, ax = plt.subplots(1, 5)\nevoked_l_aud.plot_topomap(times=0.1, axes=ax[0], show=False)\nevoked_r_aud.plot_topomap(times=0.1, axes=ax[1], show=False)\nevoked_l_vis.plot_topomap(times=0.1, axes=ax[2], show=False)\nevoked_r_vis.plot_topomap(times=0.1, axes=ax[3], show=True)" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "Notice that we created five axes, but have only four categories; the fifth\naxes is used for drawing the colorbar. You must provide room for it when you\ncreate custom plots like this, or turn the colorbar off with\n``colorbar=False``. That's what the warnings are trying to tell you. Also, we\nused ``show=False`` for the first three function calls to prevent the figure\nfrom being shown prematurely. The exact behavior depends on the mode you are\nusing for your Python session. See http://matplotlib.org/users/shell.html for\nmore information.\n\n" ], "cell_type": "markdown", "metadata": {} },
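{ "source": [ "If you do not want a colorbar at all, here is a minimal sketch of the\nvariation just described (it only rearranges parameters already shown\nabove): create exactly four axes and pass ``colorbar=False``.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "# A sketch, not part of the original tutorial: one axes per category and no\n# colorbar, so no extra axes is needed.\nfig, ax = plt.subplots(1, 4)\nevokeds = [evoked_l_aud, evoked_r_aud, evoked_l_vis, evoked_r_vis]\nfor evk, ax_ in zip(evokeds, ax):\n    evk.plot_topomap(times=0.1, axes=ax_, colorbar=False, show=False)\nplt.show()" ], "outputs": [], "metadata": { "collapsed": false } },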
{ "source": [ "We can combine the two kinds of plots in one figure using the\n:func:`mne.Evoked.plot_joint` method of Evoked objects. Called as-is\n(``evoked.plot_joint()``), this function should give an informative display\nof spatio-temporal dynamics.\nYou can directly style the time series part and the topomap part of the plot\nusing the ``topomap_args`` and ``ts_args`` parameters. You can pass key-value\npairs as a Python dictionary; these are then passed as parameters to the\ntopomaps (:func:`mne.Evoked.plot_topomap`) and time series\n(:func:`mne.Evoked.plot`) of the joint plot.\nAs an example of such styling, here topomaps are drawn at two specific time\npoints (70 and 105 ms), sensor markers are turned off (via an argument\nforwarded to `plot_topomap`), and the Global Field Power is shown:\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "ts_args = dict(gfp=True)\ntopomap_args = dict(sensors=False)\nevoked_r_aud.plot_joint(title='right auditory', times=[.07, .105],\n                        ts_args=ts_args, topomap_args=topomap_args)" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "Sometimes you may want to compare two or more conditions at a selection of\nsensors, or e.g. for the Global Field Power. For this, you can use the\nfunction :func:`mne.viz.plot_compare_evokeds`. The easiest way is to create\na Python dictionary, where the keys are condition names and the values are\n:class:`mne.Evoked` objects. If you provide lists of :class:`mne.Evoked`\nobjects, such as those for multiple subjects, the grand average is plotted,\nalong with a confidence interval band; this can be used to contrast\nconditions for a whole experiment.\nFirst, we load the evoked objects into a dictionary, setting the keys to\n'/'-separated tags (as we can do with event_ids for epochs). Then, we plot\nwith :func:`mne.viz.plot_compare_evokeds`.\nThe plot is styled with dictionary arguments, again using '/'-separated tags.\nWe plot a MEG channel with a strong auditory response.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "conditions = [\"Left Auditory\", \"Right Auditory\", \"Left visual\", \"Right visual\"]\nevoked_dict = dict()\nfor condition in conditions:\n    evoked_dict[condition.replace(\" \", \"/\")] = mne.read_evokeds(\n        fname, baseline=(None, 0), proj=True, condition=condition)\nprint(evoked_dict)\n\ncolors = dict(Left=\"Crimson\", Right=\"CornFlowerBlue\")\nlinestyles = dict(Auditory='-', visual='--')\npick = evoked_dict[\"Left/Auditory\"].ch_names.index('MEG 1811')\n\nmne.viz.plot_compare_evokeds(evoked_dict, picks=pick,\n                             colors=colors, linestyles=linestyles)" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "We can also plot the activations as images. The time runs along the x-axis\nand the channels along the y-axis. The amplitudes are color coded so that a\nshift from negative to positive amplitude translates to a shift from blue to\nred. White means zero amplitude. You can use the ``cmap`` parameter to define\nthe color map yourself. The accepted values include all matplotlib colormaps.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "evoked_r_aud.plot_image(picks=picks)" ], "outputs": [], "metadata": { "collapsed": false } },
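{ "source": [ "For instance, a minimal sketch of the ``cmap`` parameter just mentioned (any\nmatplotlib colormap name works; 'RdBu_r' is used here purely as an example):\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "# A sketch: the same image with an explicitly chosen matplotlib colormap.\nevoked_r_aud.plot_image(picks=picks, cmap='RdBu_r')" ], "outputs": [], "metadata": { "collapsed": false } },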
{ "source": [ "Finally we plot the sensor data as a topographical view. In the simple case\nwe plot only the left auditory responses, and then we plot them all in the\nsame figure for comparison. Click on an individual plot to enlarge it.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "title = 'MNE sample data (condition: %s)'\nevoked_l_aud.plot_topo(title=title % evoked_l_aud.comment)\ncolors = 'yellow', 'green', 'red', 'blue'\nmne.viz.plot_evoked_topo(evoked, color=colors,\n                         title=title % 'Left/Right Auditory/Visual')" ], "outputs": [], "metadata": { "collapsed": false } }, { "source": [ "Visualizing field lines in 3D\n-----------------------------\nWe now compute the field maps to project the MEG and EEG data onto the MEG\nhelmet and the scalp surface.\n\nTo do this we'll need coregistration information. See\n`tut_forward` for more details.\n\nHere we just illustrate usage.\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "subjects_dir = data_path + '/subjects'\ntrans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'\n\nmaps = mne.make_field_map(evoked_l_aud, trans=trans_fname, subject='sample',\n                          subjects_dir=subjects_dir, n_jobs=1)\n\n# explore several points in time\nfield_map = evoked_l_aud.plot_field(maps, time=.1)" ], "outputs": [], "metadata": { "collapsed": false } },
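{ "source": [ "As the note below points out, the head<->MRI transform is only needed for\nthe scalp (EEG) map. A minimal sketch of an MEG-only field map without a\n``trans`` file (assuming the ``ch_type`` parameter of\n:func:`mne.make_field_map`, as in current MNE versions):\n\n" ], "cell_type": "markdown", "metadata": {} }, { "execution_count": null, "cell_type": "code", "source": [ "# A sketch: without a head<->MRI transform only the MEG helmet surface is\n# available, so we restrict the field map to the MEG channels.\nmaps_meg = mne.make_field_map(evoked_l_aud, trans=None, ch_type='meg')\nevoked_l_aud.plot_field(maps_meg, time=.1)" ], "outputs": [], "metadata": { "collapsed": false } },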
{ "source": [ "<div class=\"alert alert-info\"><h4>Note</h4><p>If trans_fname is set to None, then only MEG estimates can be visualized.</p></div>
\n\n" ], "cell_type": "markdown", "metadata": {} } ], "metadata": { "kernelspec": { "display_name": "Python 2", "name": "python2", "language": "python" }, "language_info": { "mimetype": "text/x-python", "nbconvert_exporter": "python", "name": "python", "file_extension": ".py", "version": "2.7.13", "pygments_lexer": "ipython2", "codemirror_mode": { "version": 2, "name": "ipython" } } } }