
play_game.m
MATLAB implementation accompanying examples of linear, quadratic, and logistic discriminant analysis.
function reward_out = play_game(choice_vector, method)

% [REWARD_OUT] = play_game(CHOICE_VECTOR, METHOD)
%
% Replicates the reward vector generated by the vector of actions in
% CHOICE_VECTOR when played on method METHOD.
% 
% Required:
%   CHOICE_VECTOR - The vector of choices given by the user (e.g. c1(:,1)')
%
%   METHOD - Must be 1, 2, or 3.  Specifies which reward schedule is
%   used for this session
% 
% The output argument is a vector of rewards in REWARD_OUT.
%
% Examples:
%   For the examples described in our project
% 
% >> reward_hist = play_game(c1(:, 1)', 1);

% Build the 400-trial reward-probability schedules for target A (row 1)
% and target B (row 2); the three methods shift the crossover points.
switch method
    case 1
        reward_vector(1,:) = [linspace(.7, .2, 240) linspace(0.24, 0.8, 160)];
        reward_vector(2,:) = [linspace(0.6,0.2, 160) linspace(0.24, .5, 80) 0.2*ones(1, 160)];
    case 2
        reward_vector(2,:) = [linspace(.7, .2, 260) linspace(0.24, 0.8, 140)];
        reward_vector(1,:) = [linspace(0.6,0.2, 140) linspace(0.24, .5, 120) 0.2*ones(1, 140)];
    case 3
        reward_vector(1,:) = [linspace(.7, .2, 280) linspace(0.24, 0.8, 120)];
        reward_vector(2,:) = [linspace(0.6,0.2, 120) linspace(0.24, .5, 160) 0.2*ones(1, 120)];
end

xhist = get_xhist(choice_vector);
reward_out = zeros(size(xhist));
choiceB = find(choice_vector==0);
choiceA = find(choice_vector==1);
% Map each xhist value to a schedule index: the element-wise median of the
% three stacked rows clamps round(400*xhist) into the valid range [1, 400].
reward_out(choiceA) = reward_vector(1, median([ones(size(choiceA)); round(400*xhist(choiceA)); 400*ones(size(choiceA))], 1));
reward_out(choiceB) = reward_vector(2, median([ones(size(choiceB)); round(400*xhist(choiceB)); 400*ones(size(choiceB))], 1));
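
% Usage sketch (hypothetical data, shown as comments to keep this function
% file valid; assumes get_xhist is on the MATLAB path and that
% choice_vector is a 1-by-400 row vector matching the schedule length):
%
%   choices = randi([0 1], 1, 400);   % random binary A/B choice history
%   rewards = play_game(choices, 1);  % replay against schedule method 1
%   mean(rewards)                     % average schedule value encountered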
