
Discovering the hidden structure of complex dynamic systems


Dynamic Bayesian networks provide a compact and natural representation for complex dynamic systems. However, in many cases, there is no expert available from whom a model can be elicited. Learning provides an alternative approach for constructing models of

family we change. How do we choose which ESS we should compute? The first approach is to compute in advance all the expected sufficient statistics the search might need. However, since there are too many of these, this solution is impractical [2]. The second approach, which was used by Friedman et al. [1998], is to compute sufficient statistics "on demand", i.e., statistics for Y are computed only when the search needs to evaluate a structure with this family [3]. Unfortunately, that also is typically quite expensive, as it requires a traversal over the entire training sequence. These two solutions are at the extreme ends of a spectrum. Friedman et al. [1999] present an intermediate solution, which we also adopt. The search procedure works in stages. At the beginning of each stage, the search procedure posts the statistics it will require for that stage. These are selected in an informed way, based on the current state of the search. The requested statistics are then computed in one batch, using a single inference phase for all of them at once. More specifically, the algorithm finds for each variable X_i' a set Pot_i' of potential parents, based on the current network structure. At each stage, the search is restricted to consider only operations that involve adding edges Y → X_i' for Y ∈ Pot_i', or removing current arcs. The number of ESS required for these operations is fairly small, and they can be collected at once. The algorithm uses heuristics to focus the attention of the search procedure on "plausible" potential parents, and therefore requires relatively few statistics in each stage. After this restricted search is done, the process repeats, using the new network as a basis for finding new potential parents for each variable. This process is iterated until convergence of the scoring function.
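The staged procedure can be sketched roughly as follows. This is an illustrative skeleton only: the callables `candidate_parents`, `batch_ess`, and `family_score` are placeholder names standing in for the paper's heuristics, the single batched inference phase, and the scoring function, and the greedy restricted step is one plausible instantiation rather than the authors' exact search.

```python
def staged_search(variables, candidate_parents, batch_ess, family_score,
                  max_stages=10, tol=1e-9):
    """Staged structure search.  Each stage:
       1) post the family statistics the stage may need,
       2) compute them in one batch (one inference phase),
       3) run a search restricted to those families."""
    parents = {x: frozenset() for x in variables}
    prev = float("-inf")
    for _ in range(max_stages):
        pots = {x: candidate_parents(parents, x) for x in variables}
        # Families reachable by one edge addition (from Pot) or removal.
        requests = set()
        for x in variables:
            requests.add((x, parents[x]))
            for y in pots[x]:
                requests.add((x, parents[x] | {y}))
            for y in parents[x]:
                requests.add((x, parents[x] - {y}))
        ess = batch_ess(requests)        # single batched inference pass
        for x in variables:              # restricted greedy step
            best = max((fam for (v, fam) in requests if v == x),
                       key=lambda fam: family_score(x, fam, ess))
            parents[x] = best
        total = sum(family_score(x, parents[x], ess) for x in variables)
        if total - prev < tol:           # scoring function converged
            break
        prev = total
    return parents
```

The key point is step 2: every statistic the stage can possibly request is known before inference runs, so one traversal of the training sequence serves the whole stage.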

[2] In general, there is an exponential number of statistics. When we restrict the indegree of each variable, the number is polynomial, but still unrealistically large.

[3] Of course, we want to avoid unnecessary recomputations. The standard solution is to store a cache of computed statistics, and to call the ESS computations only on statistics that are not found in the cache.

5.2 Discovering Hidden Variables

As we mentioned in the introduction, a fundamental problem when learning dynamic systems from real data is the discovery of hidden variables. In stock market data, for example, internet stocks are typically correlated. Unless we realize that this correlation is due to a hidden variable (public perception of the future growth of internet revenue), the model we learn is likely to be quite poor. The task of discovering new hidden variables is a notoriously difficult one. In temporal sequences, however, we have some cues that can indicate the presence of such variables. In particular, ignoring a hidden variable will often lead to a non-Markovian correlation, induced by the loss of information as the hidden variable is "forgotten" from step to step. Thus, we can search for non-Markovian correlations, and use them as an indication that the process needs additional "memory" about the past. More precisely, suppose that we discover that we can predict X^(t+1) using X^(t) and Y^(t-1). Then we might consider creating a new hidden variable H such that H is a parent of X' and Y is a parent of H'. Thus, we will have that Y^(t-1) influences H^(t) via the Y → H' edge, and that H^(t) in turn influences X^(t+1) via the H → X' edge. In other words, H behaves as the "memory" of Y with one step of lag.

In general, we propose the following algorithm. We start by learning the edges among variables in k time slices (a k-TBN) for some fixed time window k. When some of the variables are unobserved, we use structural EM and our approximation methods to estimate the ESS of the variables in these k consecutive time slices. We note that this process uses structural EM: the sufficient statistics are computed once, and then used for an extended search phase over structures. That, combined with our approach to computing ESS for variables that are far apart in the network, allows us to estimate the ESS for the k-TBN without ever doing inference on it. After we learn such a network, we eliminate the non-Markovian arcs by creating new hidden variables that "remember" those variables that participate as parents in non-Markovian correlations. Any variable X in time slice t-d which directly influences a variable Y at time t+1 requires d new hidden variables: at time t, the ith introduced variable X^(-i) has the same value that X had at time slice t-i. In order to represent this "memory" model exactly, the CPDs of these newly created variables would have to be deterministic. However, deterministic models do not easily accommodate EM-style adaptation. Furthermore, since we want to encourage the search to construct variables that remember global phenomena, we also add "persistence" arcs that allow the hidden variables to depend on the longer-term past. Therefore, we initialize the parameters for these variables to be noisy versions of the appropriate deterministic CPDs, and make the noise biased toward persistence with the previous time slice of the hidden variable.
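The arc-elimination step above can be sketched as a small graph transformation. In this sketch an arc is a triple (source, lag, destination), and the function name and encoding are illustrative assumptions, not the paper's data structures; the point is only that an arc of lag d+1 becomes a chain of d lag-1 "memory" variables followed by one Markovian arc.

```python
def introduce_memory_chain(arcs, max_lag_ok=1):
    """Replace every arc (X, lag, Y) with lag > max_lag_ok by a chain of
    hidden variables X_minus_1 ... X_minus_d, where X_minus_i at time t
    holds the value X had at time t - i (d = lag - max_lag_ok)."""
    new_arcs, hidden = [], []
    for (src, lag, dst) in arcs:
        d = lag - max_lag_ok
        if d <= 0:                       # already Markovian: keep as-is
            new_arcs.append((src, lag, dst))
            continue
        prev = src
        for i in range(1, d + 1):
            h = f"{src}_minus_{i}"       # remembers src from i steps back
            hidden.append(h)
            new_arcs.append((prev, 1, h))    # copy with one step of lag
            prev = h
        new_arcs.append((prev, max_lag_ok, dst))  # final, Markovian arc
    return new_arcs, hidden
```

For example, a single arc of lag 3 yields two hidden variables and three lag-1 arcs, so no edge spans more than one time slice.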
Having constructed a new 2-TBN, we are now again in a position where we can run parametric EM to find better parameters for the new hidden variables. Then
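A persistence-biased noisy initialization of a memory variable's CPD, as a starting point for this parametric EM run, might look like the following. The function name and the particular mixing weights are assumptions for illustration; the table is indexed as P(H' | Y, H), where Y is the remembered source and H is the hidden variable's previous slice.

```python
import numpy as np

def memory_cpd(n_vals, copy_weight=0.8, persist_weight=0.15):
    """P(H' | Y, H): mostly copy the source Y (the softened
    deterministic CPD), lean toward repeating H (persistence bias),
    and spread the remaining mass uniformly."""
    noise = (1.0 - copy_weight - persist_weight) / n_vals
    cpd = np.full((n_vals, n_vals, n_vals), noise)   # axes: [y, h, h']
    for y in range(n_vals):
        cpd[y, :, y] += copy_weight                  # copy the source
    for h in range(n_vals):
        cpd[:, h, h] += persist_weight               # persistence bias
    return cpd

cpd = memory_cpd(3)   # every conditional distribution sums to one
```

Because the copy behavior dominates but is not deterministic, EM can still move the parameters away from pure copying if the data support a different role for the hidden variable.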
