SWARM is a cloud platform aimed at improving analytical reasoning in intelligence work. Rather than trying to structure analysts' thinking in any particular way, SWARM works by improving collaboration within groups of analysts.
- Robust and stochastic optimization
- Convex analysis
- Linear programming
- Monte Carlo simulation
- Model-based estimation
- Matrix algebra review
- Probability and statistics basics
This article is divided into three parts: the first explains how the economically dependent self-employed are defined and proposes ways to improve that definition. The second part examines the working conditions of the self-employed, while the last compares the job satisfaction of the self-employed, employees, and family workers.
The Costs of War Project is a team of 35 scholars, legal experts, human rights practitioners, and physicians, which began its work in 2011. We use research and a public website to facilitate debate about the costs of the post-9/11 wars in Iraq, Afghanistan, and Pakistan.
The NLEstimate macro allows you to estimate one or more linear or nonlinear combinations of parameters from any model for which you can save the model parameters and their variance-covariance matrix. Most modeling procedures that offer ESTIMATE, CONTRAST, or LSMEANS statements only provide for estimating or testing linear combinations of model parameters. However, common estimation problems often involve nonlinear combinations, particularly in generalized models with nonidentity link functions such as logistic and Poisson models.
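The usual tool behind this kind of estimation is the delta method: a nonlinear function of the parameters is linearized, and its variance is approximated from the model's covariance matrix. A minimal Python sketch of that idea (not the NLEstimate macro itself) follows; the coefficient, its variance, and the function g(beta) = exp(beta) are all made-up illustrations.

```python
# Delta-method sketch for a nonlinear combination of model parameters.
# Hypothetical inputs: one logistic-regression coefficient beta with
# variance var_beta; the quantity of interest is exp(beta), an odds ratio.
import math

beta = 0.5        # estimated coefficient (assumed for illustration)
var_beta = 0.04   # its variance from the model covariance matrix (assumed)

# Point estimate of the nonlinear combination g(beta) = exp(beta)
estimate = math.exp(beta)

# Delta method: Var[g(beta)] ~= g'(beta)^2 * Var[beta]
gradient = math.exp(beta)          # derivative of exp(beta)
se = math.sqrt(gradient ** 2 * var_beta)

# Approximate 95% Wald confidence interval
lower, upper = estimate - 1.96 * se, estimate + 1.96 * se
print(estimate, se, lower, upper)
```

The same pattern extends to several parameters by replacing the scalar derivative with a gradient vector and the variance with the full covariance matrix.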
This sample combines macro programming with PROC FREQ and DATA step logic to count the number of missing and non-missing values for every variable in a data set. Two methods of structuring the resulting data set are shown.
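The counting logic itself is straightforward; as a language-neutral sketch (not the SAS sample, whose code is not shown here), the same tally can be expressed in a few lines of Python over an in-memory record list, with `None` standing in for SAS's missing value. The records are made up for illustration.

```python
# Count missing and non-missing values for every variable in a small
# in-memory "data set" (a list of records). None represents a missing value.
rows = [
    {"age": 34,   "weight": 150,  "height": 64},
    {"age": None, "weight": 180,  "height": None},
    {"age": 52,   "weight": None, "height": 70},
]

counts = {}   # variable name -> (missing, non-missing)
for row in rows:
    for var, value in row.items():
        miss, nonmiss = counts.get(var, (0, 0))
        if value is None:
            counts[var] = (miss + 1, nonmiss)
        else:
            counts[var] = (miss, nonmiss + 1)

for var, (miss, nonmiss) in counts.items():
    print(var, miss, nonmiss)
```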
The %VARTEST macro provides a one-tailed test of the null hypothesis that the variance equals a non-zero constant for normally distributed data. It also provides point and confidence-interval estimates.
NOTE: The CIBASIC option in PROC UNIVARIATE provides one- and two-sided confidence intervals for the standard deviation and variance. PROC TTEST provides a confidence interval for the standard deviation using either of two methods.
PURPOSE:
The %VARTEST macro tests the null hypothesis that the variance (or standard deviation) of a set of independent and identically normally distributed values is equal to a specified constant against the alternative that the variance (or standard deviation) exceeds the constant. The macro also provides point and confidence-interval estimates for the variance and standard deviation.
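The standard statistic for this test is (n - 1)s²/σ₀², which follows a chi-square distribution with n - 1 degrees of freedom under the null hypothesis for normal data. The sketch below computes that statistic (it is not the %VARTEST macro itself); the sample values and null variance are made up.

```python
# One-sided variance test statistic: under H0 (variance = sigma0_sq),
# (n - 1) * s^2 / sigma0_sq ~ chi-square with n - 1 degrees of freedom.
# In SAS the p-value would come from, e.g., 1 - PROBCHI(t_stat, n - 1).
import statistics

data = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical sample
sigma0_sq = 1.0                     # null-hypothesis variance (assumed)

n = len(data)
s_sq = statistics.variance(data)    # sample variance (divisor n - 1)
t_stat = (n - 1) * s_sq / sigma0_sq

# Reject H0 in favor of variance > sigma0_sq when t_stat exceeds the
# upper chi-square critical value with n - 1 df (9.488 at alpha = 0.05
# for 4 df, from standard tables).
print(n, s_sq, t_stat)
```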
NOTE: Beginning in SAS 9.2, the QIC statistic is produced by PROC GENMOD. Beginning in SAS 9.4 TS1M2, QIC is available in PROC GEE.
PURPOSE:
The %QIC macro computes the QIC and QICu statistics proposed by Pan (2001) for GEE (generalized estimating equations) models. These statistics allow comparisons of GEE models (model selection) and selection of a correlation structure.
The SELECT macro performs model selection methods for categorical-response models that can be fit in PROC LOGISTIC. These include models using the logit, probit, cloglog, cumulative logit, or generalized logit links. The macro supports binary as well as ordinal and nominal multinomial models.
Standard model selection is done by choosing candidate effects for entry to or removal from the model according to their significance levels. After completion, the set of models selected at each step of this process is sorted on the selected criterion: AUC, R-square, max-rescaled R-square, AIC, or BIC. The requested number of best models on the selected criterion is displayed.
Many of us are presented with SAS data sets where codes such as 9999 are intermingled with real data values. Sometimes these codes represent missing values; sometimes they represent other non-data values.
If you run SAS procedures on numeric variables in such a data set, you will, obviously, produce nonsense. What we present here is a macro that will automatically check all the numeric variables in a SAS data set for a specific data value, and produce a report showing which variables contain this special value and how many times it appeared.
The macro is called FIND_VALUE and is presented below. You can download this macro and many other useful macros from the SAS Companion Web Site: support.sas.com/publishing. Search for my book, Cody's Data Cleaning Techniques, Second Edition, and then click on the link to download the programs and data files from the book.
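The scan-every-numeric-variable idea is easy to sketch outside SAS as well. The Python function below is a rough analogue of that idea (not the FIND_VALUE macro itself); the records and the sentinel code 9999 are made up for illustration.

```python
# Rough analogue of the FIND_VALUE idea: scan every numeric variable
# in a small in-memory data set for a sentinel code and report which
# variables contain it and how many times it appears.
rows = [
    {"id": "A1", "sbp": 120,  "dbp": 80},
    {"id": "A2", "sbp": 9999, "dbp": 70},
    {"id": "A3", "sbp": 9999, "dbp": 9999},
]

def find_value(rows, special):
    """Count occurrences of `special` in each numeric variable."""
    hits = {}
    for row in rows:
        for var, value in row.items():
            if isinstance(value, (int, float)) and value == special:
                hits[var] = hits.get(var, 0) + 1
    return hits

print(find_value(rows, 9999))
```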
NOTE: Beginning in SAS 9, you can use the ODS GRAPHICS ON; statement and the PLOTS=SCATTER(ELLIPSE=MEAN) or PLOTS=SCATTER(ELLIPSE=PREDICTED) option in the PROC CORR statement to get confidence ellipse plots about the mean or individual values.
PURPOSE:
The %CONELIP macro generates confidence ellipses for bivariate normal data. It can create either prediction ellipses for the data or confidence ellipses about the mean.
NOTE: This macro is obsolete beginning with SAS 8.0. Use the STDIZE procedure in SAS/STAT software beginning in that release.
PURPOSE:
The %STDIZE macro standardizes one or more numeric variables in a SAS data set by subtracting a location measure and dividing by a scale measure. A variety of location and scale measures are provided, including estimates that are resistant to outliers and clustering.
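The subtract-location, divide-by-scale recipe is easy to illustrate. The sketch below (not the %STDIZE macro itself) shows two of the many possible choices: the classical mean/standard-deviation pair and the outlier-resistant median/IQR pair; the data are made up.

```python
# Standardization as "subtract a location measure, divide by a scale
# measure", with a classical and a robust choice of measures.
import statistics

data = [1.0, 2.0, 3.0, 4.0, 100.0]   # hypothetical values, one outlier

def standardize(values, location, scale):
    return [(v - location) / scale for v in values]

# Classical: mean and sample standard deviation
z_classic = standardize(data, statistics.mean(data), statistics.stdev(data))

# Robust: median and interquartile range
q1, _, q3 = statistics.quantiles(data, n=4)
z_robust = standardize(data, statistics.median(data), q3 - q1)

print(z_classic)
print(z_robust)
```

With the robust pair, the outlier inflates the scale far less, so the bulk of the values keep a sensible spread after standardization.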
NOTE: The MVN macro is obsolete. Beginning in SAS 9.2, use the RANDNORMAL function in SAS/IML software or PROC SIMNORMAL in SAS/STAT software to generate multivariate normal data.
PURPOSE:
The %MVN macro generates multivariate normal data using the Cholesky root of the variance-covariance matrix. Bivariate normal data can be generated using the DATA step.
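The Cholesky approach works by factoring the covariance matrix as Sigma = L Lᵀ and transforming a vector z of independent standard normals into mu + L z. A pure-Python sketch for the 2x2 case (not the %MVN macro itself) follows; the mean vector and covariance matrix are made up.

```python
# Multivariate normal draws via the Cholesky root: Sigma = L L^T,
# then each draw is mu + L z with z a vector of independent N(0,1)s.
import math
import random

mu = [1.0, 2.0]
sigma = [[4.0, 2.0],
         [2.0, 3.0]]   # must be symmetric positive definite (assumed)

# Cholesky factor of a 2x2 matrix, written out explicitly
l11 = math.sqrt(sigma[0][0])
l21 = sigma[1][0] / l11
l22 = math.sqrt(sigma[1][1] - l21 ** 2)
L = [[l11, 0.0], [l21, l22]]

def draw(rng):
    """One multivariate normal draw via mu + L z."""
    z = [rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)]
    return [mu[0] + L[0][0] * z[0],
            mu[1] + L[1][0] * z[0] + L[1][1] * z[1]]

rng = random.Random(42)
sample = [draw(rng) for _ in range(5)]
print(L)
print(sample)
```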