Published online 2014 Dec 9. doi: 10.1208/s12248-014-9704-6

Associated Data

Supplementary Materials
12248_2014_9704_MOESM1_ESM.ods (47K)
12248_2014_9704_MOESM3_ESM.sas (1.3K)
12248_2014_9704_MOESM5_ESM.txt (149 bytes)
12248_2014_9704_MOESM7_ESM.txt (610 bytes)
12248_2014_9704_MOESM9_ESM.txt (583 bytes)
12248_2014_9704_MOESM11_ESM.txt (32K)
12248_2014_9704_MOESM13_ESM.txt (19K)
12248_2014_9704_Fig1_ESM.gif (13K)
12248_2014_9704_Fig2_ESM.gif (13K)
12248_2014_9704_Fig3_ESM.gif (11K)
12248_2014_9704_Fig4_ESM.gif (12K)
12248_2014_9704_Fig5_ESM.gif (16K)
12248_2014_9704_Fig6_ESM.gif (15K)
12248_2014_9704_Fig7_ESM.gif (16K)
12248_2014_9704_Fig8_ESM.gif (23K)
12248_2014_9704_Fig9_ESM.gif (18K)
12248_2014_9704_Fig10_ESM.gif (16K)
12248_2014_9704_Fig11_ESM.gif (16K)

Abstract

In order to help companies qualify and validate the software used to evaluate bioequivalence trials with two parallel treatment groups, this work aims to define datasets with known results. This paper puts a total of 11 datasets into the public domain, along with proposed consensus results obtained via evaluations with six different software packages (R, SAS, WinNonlin, OpenOffice Calc, Kinetica, EquivTest). Insofar as possible, datasets were evaluated with and without the assumption of equal variances for the construction of a 90% confidence interval. Not all software packages provide functionality for the assumption of unequal variances (EquivTest, Kinetica), and not all packages can handle datasets with more than 1000 subjects per group (WinNonlin). Where results could be obtained across all packages, one showed questionable results when datasets contained unequal group sizes (Kinetica). A proposal is made for the results that should be used as validation targets.

Electronic supplementary material

The online version of this article (doi:10.1208/s12248-014-9704-6) contains supplementary material, which is available to authorized users.

KEY WORDS: bioequivalence, parallel design, software validation

INTRODUCTION

Bioequivalence testing is a general requirement for companies developing generic medicines, testing food effects, making formulation changes, or developing extensions to existing approved medicines where the rate and extent of absorption into the systemic circulation determine safety and efficacy. In many countries and jurisdictions, the common way of testing for bioequivalence is to compare the pharmacokinetics of the new formulation (“test”) with that of the known formulation (“reference”). Using non-compartmental analysis, the primary metrics derived in bioequivalence studies are most often the area under the concentration-time curve until the last sampling point (AUCt) and the maximum observed concentration (Cmax) for both the test and the reference. A confidence interval is then constructed on the basis of two one-sided t tests, typically at a nominal α level of 5%. The most common designs for bioequivalence testing are the two-treatment, two-sequence, two-period randomized crossover design and the randomized two-group parallel design. The former is considerably more common than the latter and is the design of choice for active ingredients whose half-lives are not prohibitively long (1–4).

To evaluate the data obtained in a bioequivalence trial, companies must use validated software. In the absence of datasets with known results, however, it is difficult to know whether the acquired software correctly performs the task it is supposed to do, and it is therefore practically impossible to validate software in-house beyond installation qualification and operational qualification. On that basis, we recently published a paper in this journal with reference datasets for two-treatment, two-sequence, two-period crossover trials, in which the datasets were evaluated with different software packages (5).

Since trials with two parallel groups are the second most common type of bioequivalence studies and published datasets with known results are scarce, the purpose of this paper is to propose reference datasets for two-group parallel trials and derive 90% confidence intervals with different statistical software packages in order to establish consensus results that can be used—together with the datasets—to qualify or validate software analyzing the outcomes from parallel group bioequivalence trials. It is outside the scope of this paper to discuss more than two groups or the other design options that exist such as replicate designs.

For validation purposes, datasets should be of varying complexity in terms of imbalance, outliers, range, heteroscedasticity, and point estimate in order to cover any situation which can reasonably be expected to occur in practice. It is not the aim of this work to validate any software or to advocate for or against any specific software package.

MATERIALS AND METHODS

Datasets

The datasets used in this paper include small and large datasets, outliers, unequal group sizes, and heteroscedasticity as a type of stress test. The characteristics of the datasets are as follows:

  1. These are the first-period data from Clayton and Leslie (6), with balance between treatment groups, i.e., equal sizes of the test group (NT) and the reference group (NR): NT = NR = 9.

  2. Based on dataset P1, where subjects 10…14 were removed (NT = 9, NR = 4).

  3. Based on dataset P1, where the raw data entry for subject 4 has been multiplied by 100.

  4. This is a dataset simulated on the basis of a log-normal distribution, where the simulated geometric means ratio (GMR) was 0.95, with balance between treatment groups (NT = NR = 20) and heteroscedasticity (CVT = 0.25 and CVR = 2.5); see the simulation sketch below.

  5. This is a dataset simulated on the basis of a log-normal distribution, where the simulated GMR was 1.1, with slight imbalance (NT = 31, NR = 29) and homoscedasticity (CVR = CVT = 0.05). Although a CV of 0.05 is not realistic for any drug tested in humans, variabilities at this level are often seen with orally inhaled products (e.g., delivered dose). Applicants in Europe may choose the parallel bioequivalence model for their testing. To qualify for approval on the basis of such data, the actual requirement is a 90% confidence interval with a 15% equivalence margin, corresponding to an acceptance range of 85.00–117.65% when a multiplicative model is used (7).

  6. This is the dataset from Bolton and Bon (8); NT = 24, NR = 26.

  7. This is a dataset simulated on the basis of a log-normal distribution, where the simulated GMR was 1.2, with imbalance between treatment groups (NT = 1000 and NR = 200) and heteroscedasticity (CVT = 0.25 and CVR = 2.5).

  8. This is a dataset simulated on the basis of a log-normal distribution, where the simulated GMR was 1.1, with balance between treatment groups (NT = NR = 1000) and homoscedasticity (CVR = CVT = 0.5).

  9. This is a dataset simulated on the basis of a log-normal distribution, where the simulated GMR was 1.1, with balance between treatment groups (NT = NR = 1000) and heteroscedasticity (CVR = 0.25, CVT = 2.5).

  10. Based on dataset P6, where the raw data entry for subject 138 (receiving test) was multiplied by 100.

  11. Based on dataset P6, where the raw data entries for subjects 1…50 (test) and subjects 1001…1050 (reference) were multiplied by 100,000.

All datasets can be downloaded as tab-delimited text files as “Electronic Supplementary Material” from the journal’s homepage. The supplementary material also presents individual box-plots of the datasets.
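The simulated datasets (P4, P5, P7–P9) were generated from log-normal distributions with prespecified GMRs and per-group CVs. The following is a minimal R sketch of such a simulation; it is not the authors' actual code, and the seed and the reference geometric mean of 100 are arbitrary assumptions for illustration.

  set.seed(123)                        # arbitrary seed, for reproducibility only
  simulate_parallel <- function(NT, NR, GMR, CVT, CVR, gm_ref = 100) {
    sdlog_T <- sqrt(log(CVT^2 + 1))    # log-scale SD corresponding to a log-normal CV
    sdlog_R <- sqrt(log(CVR^2 + 1))
    test      <- rlnorm(NT, meanlog = log(gm_ref * GMR), sdlog = sdlog_T)
    reference <- rlnorm(NR, meanlog = log(gm_ref),       sdlog = sdlog_R)
    data.frame(treatment = rep(c("T", "R"), c(NT, NR)), value = c(test, reference))
  }
  # a P4-like dataset: GMR 0.95, NT = NR = 20, CVT = 0.25, CVR = 2.5
  d4 <- simulate_parallel(NT = 20, NR = 20, GMR = 0.95, CVT = 0.25, CVR = 2.5)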

Evaluation of Datasets

The datasets were evaluated with

  • R (version 3.0.2, R Foundation for Statistical Computing 2013), running under Windows 8 (64 bit).

  • OpenOffice Calc (version 4.1.0, The Apache Software Foundation 2014), running under Windows 8. A spreadsheet was set up.

  • Phoenix/WinNonlin (64-bit version 6.4.0.768, Pharsight, A Certara Company 2014), running under Windows 7 (64 bit).

  • EquivTest/PK (Statistical Solutions 2006), running in a 32-bit XP Virtual PC under Windows 7 (64 bit).

  • Kinetica (version 5.0.10, Thermo Fisher Scientific 2007), running in a 32-bit XP Virtual PC under Windows 7 (64 bit).

  • SAS (32 bit version 9.2, SAS Institute 2008), running under Windows 7 (64 bit).

The datasets were evaluated with and without the Welch correction for degrees of freedom (ν), corresponding to either assuming heteroscedasticity (unequal variances) or homoscedasticity (equal variances), respectively, of treatment groups.

In the Welch correction, the degrees of freedom are approximated as

\nu = \dfrac{\left(s_T^2/N_T + s_R^2/N_R\right)^2}{\dfrac{\left(s_T^2/N_T\right)^2}{N_T - 1} + \dfrac{\left(s_R^2/N_R\right)^2}{N_R - 1}}    (1)

where sT² and sR² are the variances, and NT and NR are the sample sizes, of the groups treated by the test and the reference, respectively.

If no Welch correction is applied, the degrees of freedom are

\nu = N_T + N_R - 2    (2)

In all cases, datasets were evaluated towards construction of a 90% confidence interval (CI) around the ratio of log-transformed sample means (point estimate, PE), corresponding to a nominal α level of 5%:

CI = \exp\left[\left(\overline{\ln x_T} - \overline{\ln x_R}\right) \pm t_{1-\alpha,\nu}\,\sqrt{MSE\left(\frac{1}{N_T} + \frac{1}{N_R}\right)}\right]    (3)

where t1−α,ν is the one-sided critical value of Student's t distribution at ν degrees of freedom for the given α (in this paper, 0.05) and MSE is the mean square error (estimated pooled variance).
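As an illustration only (this is not the authors' supplementary R script), a 90% CI on the ratio of geometric means can be obtained on the log scale with R's t.test, which applies the Welch–Satterthwaite degrees of freedom of Eq. (1) (with separate group variances in the standard error) when welch = TRUE, and the pooled-variance approach of Eqs. (2) and (3) otherwise; the data frame d4 from the simulation sketch above is assumed.

  ci90 <- function(d, welch = TRUE) {
    lt <- log(d$value[d$treatment == "T"])   # log-transformed test data
    lr <- log(d$value[d$treatment == "R"])   # log-transformed reference data
    fit <- t.test(lt, lr, var.equal = !welch, conf.level = 0.90)
    # back-transform and report in percent of the reference
    round(100 * exp(c(PE = mean(lt) - mean(lr),
                      lower = fit$conf.int[1], upper = fit$conf.int[2])), 2)
  }
  ci90(d4, welch = TRUE)    # Welch-corrected 90% CI
  ci90(d4, welch = FALSE)   # equal-variance 90% CI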

R and SAS are script-/code-based applications. Examples of scripts used for these two software packages are uploaded as a supplementary material. They should be adapted to the user’s preferences. The OpenOffice Calc spreadsheet is also uploaded as a supplementary material.

Phoenix/WinNonlin natively does not offer evaluations based on the Welch correction. A workaround (9,10) was used in generating the results given in Table I. The Welch correction appears to be impossible in the menu-driven software packages EquivTest/PK and Kinetica.

Table I

Results obtained with the different statistical packages with Welch correction

Dataset   90% confidence interval (point estimate)
          R                        OO Calc                  WinNonlin                   SAS
P1        26.78–88.14 (48.58)      26.78–88.14 (48.58)      26.78–88.14 (48.58)^a       26.78–88.14 (48.58)
P2        23.71–74.38 (41.99)      23.71–74.38 (41.99)      23.71–74.38 (41.99)^a       23.71–74.38 (41.99)
P3        24.40–449.08 (104.67)    24.40–449.08 (104.67)    24.40–449.08 (104.67)^a     24.40–449.08 (104.67)
P4        38.05–136.15 (71.97)     38.05–136.15 (71.97)     38.05–136.15 (71.97)^a      38.05–136.15 (71.97)
P5        106.44–112.10 (109.23)   106.44–112.10 (109.23)   106.44–112.10 (109.23)^a    106.44–112.10 (109.23)
P6        91.84–115.79 (103.12)    91.84–115.79 (103.12)    91.84–115.79 (103.12)^a     91.84–115.79 (103.12)
P7        97.38–138.51 (116.14)    97.38–138.51 (116.14)    –^b                         97.38–138.51 (116.14)
P8        105.79–113.49 (109.57)   105.79–113.49 (109.57)   –^b                         105.79–113.49 (109.57)
P9        103.80–120.61 (111.89)   103.80–120.61 (111.89)   –^b                         103.80–120.61 (111.89)
P10       97.82–139.17 (116.68)    97.82–139.17 (116.68)    –^b                         97.82–139.17 (116.68)
P11       6.30–21.60 (11.67)       6.30–21.60 (11.67)       –^b                         6.30–21.60 (11.67)

Ninety percent confidence intervals and point estimates are given in percent of the reference, rounded to two decimals as suggested by the FDA and EMA. Calculation apparently not possible in EquivTest/PK and Kinetica.

^a Results obtained by the workaround (9)

^b Not applicable due to the software's limitations

RESULTS

Table I presents the 90% confidence intervals with point estimates evaluated by the different software packages if equal variances are not assumed (i.e., the Welch correction is applied).

Table II presents the corresponding results if equal variances are assumed.

Table II

Results obtained with the different statistical packages without Welch correction

Dataset   90% confidence interval (point estimate)
          R                          OO Calc                    EquivTest                  WinNonlin                  Kinetica                   SAS
P1        27.15–86.94 (48.58)        27.15–86.94 (48.58)        27.15–86.94 (48.58)        27.15–86.94 (48.58)        27.15–86.94 (48.58)        27.15–86.94 (48.58)
P2        18.26–96.59 (41.99)        18.26–96.59 (41.99)        18.26–96.59 (41.99)        18.26–96.59 (41.99)        15.76–119.00 (41.99)       18.26–96.59 (41.99)
P3        26.35–415.71 (104.67)      26.35–415.71 (104.67)      26.35–415.71 (104.67)      26.35–415.71 (104.67)      26.35–415.71 (104.67)      26.35–415.71 (104.67)
P4        38.60–134.21 (71.97)       38.60–134.21 (71.97)       38.60–134.21 (71.97)       38.60–134.21 (71.97)       38.60–134.21 (71.97)       38.60–134.21 (71.97)
P5        106.44–112.10 (109.23)     106.44–112.10 (109.23)     106.44–112.10 (109.23)     106.44–112.10 (109.23)     106.39–112.14 (109.23)     106.44–112.10 (109.23)
P6        91.85–115.78 (103.12)^a    91.85–115.78 (103.12)^a    91.85–115.78 (103.12)^a    91.85–115.78 (103.12)^a    92.07–115.50 (103.12)      91.85–115.78 (103.12)^a
P7        106.86–126.23 (116.14)     106.86–126.23 (116.14)     106.86–126.23 (116.14)     106.86–126.23 (116.14)     104.30–129.32 (116.14)     106.86–126.23 (116.14)
P8        105.79–113.49 (109.57)     105.79–113.49 (109.57)     105.79–113.49 (109.57)     105.79–113.49 (109.57)     105.79–113.49 (109.57)     105.79–113.49 (109.57)
P9        103.80–120.61 (111.89)     103.80–120.61 (111.89)     103.80–120.61 (111.89)     103.80–120.61 (111.89)     103.80–120.61 (111.89)     103.80–120.61 (111.89)
P10       107.20–126.99 (116.68)     107.20–126.99 (116.68)     107.20–126.99 (116.68)     107.20–126.99 (116.68)     104.59–130.16 (116.68)     107.20–126.99 (116.68)
P11       7.83–17.38 (11.67)         7.83–17.38 (11.67)         7.83–17.38 (11.67)         7.83–17.38 (11.67)         6.98–19.51 (11.67)         7.83–17.38 (11.67)

Ninety percent confidence intervals and point estimates are given in percent of the reference, rounded to two decimals as suggested by the FDA and EMA.

^a Accords with Bolton and Bon (8)

DISCUSSION

The conventional t test is fairly robust against violations of homoscedasticity but quite sensitive to unequal group sizes (11). Furthermore, preliminary testing for equality of variances is flawed and should be avoided (12). If the assumptions are violated, the t test becomes liberal, i.e., the patient's risk might exceed the nominal level, and an alternative (e.g., based on the Welch correction) is suggested (11,13). FDA's guidance states: “For parallel designs […] equal variances should not be assumed” (14).

In WinNonlin, a somewhat cumbersome workaround allows a Welch correction for construction of the confidence interval, whereas in EquivTest/PK and Kinetica, there appears to be no option to evaluate two-group parallel designs with the Welch correction for unequal variances. Due to limitations of WinNonlin (fixed factors are restricted to 1000 levels), we were not able to process datasets P7–P11 with the published workaround.

Kinetica consistently arrived at confidence intervals that differed from those of the other packages if datasets had unequal group sizes (i.e., P2, P5–P7, P10–P11). A similar phenomenon was noted in a previous publication presenting balanced and imbalanced reference datasets for two-treatment, two-sequence, two-period crossover bioequivalence trials (5). Although we do not have access to Kinetica's source code, it is noted that when group sizes are equal (that is, when NT = NR), Eq. (3) can be simplified to

CI = \exp\left[\left(\overline{\ln x_T} - \overline{\ln x_R}\right) \pm t_{1-\alpha,\nu}\,\sqrt{\frac{2\,MSE}{N_R}}\right]    (4)

and if this version of the equation is uncritically applied to imbalanced datasets, then

  1. The point estimate remains correct.

  2. The width of the confidence interval will change. The use of Eq. (4) could therefore increase the chance of products appearing approvable (narrower confidence intervals) when NT < NR and decrease the chance of products appearing approvable (wider confidence intervals) when NT > NR (see the numeric sketch after this list).

  3. The results coincide with those we obtained from Kinetica for the investigated datasets. Equation (4) is stated in the user manual (15).
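A quick numeric illustration of point 2 (not taken from the paper; it simply compares the standard-error factors of Eqs. (3) and (4) with MSE = 1):

  se_eq3 <- function(NT, NR) sqrt(1/NT + 1/NR)   # correct factor from Eq. (3)
  se_eq4 <- function(NT, NR) sqrt(2/NR)          # simplified factor from Eq. (4)
  se_eq3(4, 9); se_eq4(4, 9)   # NT < NR: ~0.601 vs ~0.471 -> narrower interval
  se_eq3(9, 4); se_eq4(9, 4)   # NT > NR: ~0.601 vs ~0.707 -> wider interval (as for P2)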

While it is outside the scope of this paper to debate any specific software's general fitness for bioequivalence evaluations, the fact that different packages arrive at different results with identical datasets underscores the need for proper validation. The datasets and evaluations presented here are relatively simple; in various areas of drug development, much more sophisticated statistical models with many more numerical settings are available. Examples include mixed-effect models used for longitudinal data or replicated bioequivalence trials, and survival or time-to-event trials evaluated by Kaplan-Meier derivatives, to mention but a few. To help companies qualify or validate their software for such evaluations, we call on other authors to publish datasets of varying complexity with known results that can be confirmed by multiple statistical packages.

CONCLUSION

This paper releases 11 datasets into the public domain, along with proposed consensus results, in order to help companies qualify and validate their statistical software for the evaluation of bioequivalence trials with two treatment groups. The datasets were evaluated with different statistical packages, with and without the assumption of equal variances for the construction of the 90% confidence interval. Not all packages are able to use a Welch correction for unequal variances when analyzing the datasets, and not all packages can handle the largest datasets. In addition, one package seemed to arrive at results that stand in contrast to the others. We propose that the results obtained with R, which are identical to those obtained with SAS (and, where possible, WinNonlin), be targeted by companies qualifying or validating their software. The datasets are available as supplementary material.

Electronic Supplementary Material

Contributor Information

Anders Fuglsang, Email: anfu@fuglsangpharma.com.

Helmut Schütz, Phone: +43 12311746, Email: helmut.schuetz@bebac.at.

Detlew Labes, Phone: +49 3342237973, Email: dlabes@ccdrdag.com.

References

1. European Medicines Agency, Committee for Human Medicinal Products. Guideline on the investigation of bioequivalence. CPMP/EWP/QWP/1401/98 Rev. 1/ Corr. 2010.
2. US Food and Drug Administration. Bioavailability and bioequivalence studies for orally administered drug products—general considerations. Revision 1. 2003.
3. World Health Organization. Multisource (generic) pharmaceutical products: guidelines on registration requirements to establish interchangeability. In: Fortieth report of the WHO Expert Committee on Specifications for Pharmaceutical Preparations. Geneva, World Health Organization. WHO Technical Report Series, No. 937, Annex 7. 2006.
4. Health Canada, Therapeutic Products Directorate. Conduct and analysis of comparative bioavailability studies. 2012.
5. Schütz H, Labes D, Fuglsang A. Reference datasets for 2-treatment, 2-sequence, 2-period bioequivalence studies. AAPS J. 2014;16:1292–7. doi:10.1208/s12248-014-9661-0.
6. Clayton D, Leslie A. The bioavailability of erythromycin stearate versus enteric-coated erythromycin base when taken immediately before and after food. J Int Med Res. 1981;9:470–7.
7. European Medicines Agency, Committee for Medicinal Products for Human Use. Guideline on the requirements for clinical documentation for orally inhaled products (OIP) including the requirements for demonstration of therapeutic equivalence between two inhaled products for use in the treatment of asthma and chronic obstructive pulmonary disease (COPD) in adults and for use in the treatment of asthma in children and adolescents. CPMP/EWP/4151/00 Rev.1. 2009.
8. Bolton S and Bon C. Statistical consideration: alternate designs and approaches for bioequivalence assessments. In: Kanfer I and Shargel L, editors. Generic Drug Product Development. Bioequivalence Issues. New York: Informa Healthcare; 2008. p. 123–141.
9. Hughes L. Adjusting for Unequal Variances in WNL Bioeq. Certara Forums, Phoenix WNL Basics, 1 July 2013. http://www.certara.com/forums/topic/453-adjusting-for-unequal-variances-in-wnl-bioeq/ (accessed 5 November 2014; free registration required).

10. Anon. Phoenix WinNonlin 6.4 User’s Guide, Pharsight 2014, p. 86.
11. Wang H, Chow S-C. A practical approach for comparing means of two groups without equal variance assumption. Statist Med. 2002;21:3137–51. doi:10.1002/sim.1238.
12. Moser BK, Stevens GR. Homogeneity of variance in the two-sample means test. Am Stat. 1992;46:19–21.
13. Ruxton GD. The unequal variance t-test is an underused alternative to Student's t-test and the Mann–Whitney U test. Behav Ecol. 2006;17:688–90. doi:10.1093/beheco/ark016.
14. US Food and Drug Administration. Statistical approaches to establishing bioequivalence. 2001.
15. Anon. Kinetica 5.0 User Manual Revision Number 1.00. Thermo Fisher Scientific Inc. 2008. p. 667.
Articles from The AAPS Journal are provided here courtesy of American Association of Pharmaceutical Scientists
Helmut
★★★
Vienna, Austria,
2015-07-20 17:34
Posting: # 15110
Views: 15,221

Certara (Pharsight): Phoenix/WinNonlin licensing policy [Software]

Dear all,
Certara (the company behind Pharsight) changed their pricing policy for CROs (only!). In addition to the annual single-user license fee (2,444 USD), CROs will be charged a fee based on the number of studies/projects performed per year… Part of the business-lingo I received:

As services providers use the Phoenix software on behalf of many sponsors and their compounds, they generally receive far greater value from these tools than companies using the software of their own compounds.
The premium is based on the number of preclinical or clinical studies the software is used in.
A Study/Project will be considered a discrete trial for a discrete indication for a discrete molecule. For cost purposes, multiple trial “options” or “iterations” or “multiple deliverables” against a discrete trial / indication / molecule would still be considered a single “study” or “project”.

studies      fee (USD)    USD/study
1 – 10       1,500        394 – 3,944
11 – 50      7,500        199 – 904
51 – 150     22,500       166 – 489
151 – 300    45,000       158 – 314

45 days before the end of the contract period, you will be asked to submit a certification document to reconcile the # of actual studies completed to those you purchased.
If you have gone over the Tier paid for, Certara would ask for the difference at that time.


For a small CRO (say ≤50 studies/year) that’s an increase in cost of 307%. Wow!
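A quick R check of the arithmetic behind these figures, assuming the tier premium is charged on top of the 2,444 USD annual license (this assumption reproduces the USD/study column above):

  license <- 2444
  tiers <- data.frame(lo = c(1, 11, 51, 151), hi = c(10, 50, 150, 300),
                      premium = c(1500, 7500, 22500, 45000))
  tiers$usd_per_study_max <- round((license + tiers$premium) / tiers$lo)  # few studies in the tier
  tiers$usd_per_study_min <- round((license + tiers$premium) / tiers$hi)  # many studies in the tier
  tiers                                    # reproduces the 394-3,944 ... 158-314 ranges
  round(100 * tiers$premium[2] / license)  # ~307 (%) on top of the license for a <=50-studies CRO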
Another trick: Until recently the single user license was issued for a particular machine. Now it is issued for a particular user. If in the past two users ran the software (though not simultaneously) on the same machine, no problem. No way with the new “licensing model”. An easy way to double Certara’s revenue squeezed out from small CROs.

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
Lucas

Brazil,
2015-07-20 19:32
@ Helmut
Posting: # 15111
Views: 13,896

Certara (Pharsight): Phoenix/WinNonlin licensing policy

Hello guys.
Yep, I also received this news recently, when I was speaking with Certara's agent in South America. They've increased the price considerably with this new policy, and they may lose market due to that. I do think Phoenix is a great software for BA/BE and use it now for many years, but IMHO this is a very risky strategy for them. We use floating licenses here, which can be used on any computer connected to our server, one user per license but it could be any user. Small CROs will be indeed squeezed. The license fee will be cheaper than this extra fee. And also, don't know how they'll account for the studies conducted, but CROs sometimes use the software for investigational, non-commercial purposes… I wonder if those will count.
May be time to improve my R 'almost non-existing' skills!
Lucas
Helmut
★★★
Vienna, Austria,
2015-07-21 01:42
@ Lucas
Posting: # 15112
Views: 13,721
Hi Lucas,
» […] they may lose market due to that. I do think Phoenix is a great software for BA/BE and use it now for many years, but IMHO this is a very risky strategy for them.

Agree. I have used it since the DOS version (PCNONLIN 1) back in 1986 and WinNonlin 1 (1998). The license fee has increased exponentially (I have no earlier data in my files: from USD 949 in 2004 to 2,444 this year).
Even if I take the US’ inflation into account that’s +7.2% / year. Nice business model. Maybe I should seriously reconsider my consultancy fees…
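Back-of-envelope check in R (assumptions: eleven years from 2004 to 2015 and an average US inflation of roughly 2% per year):

  nominal <- (2444 / 949)^(1 / 11) - 1   # ~0.09, about 9% per year nominal
  real    <- (1 + nominal) / 1.02 - 1    # ~0.07, i.e. roughly the +7% per year quoted above
  round(100 * c(nominal = nominal, real = real), 1)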
» […] And also, don't know how they'll account for the studies conducted, …

I guess small CROs – which will not jump ship – will opt for the lowest fee (still an amazing cost increase of 75% from the 2014 fee of 2,254 USD). Then what?

[…] you will be asked to submit a certification document to reconcile the # of actual studies completed to those you purchased.

A “certification document”? Oh yes, I performed 100, but state 5. Will Certara hire a private dick sneaking into my office and check? C’mon! Or is there a hidden counter in the software which phones home every time I execute a workflow?
» … but CROs sometimes use the software for investigational, non-commercial purposes… I wonder if those will count.

Yep. I regularly recalculate stuff posted here (~10/year). Apart from Pharsight’s employees I’m the top-poster in Certara’s Phoenix-Forum. Similar over there. Shall I pay for sumfink I do in my spare time at no costs? Gimme a break!
» May be time to improve my R 'almost non-existing' skills!

Why not? Many companies use PHX/WNL for NCA only (and run the stats in SAS). Package bear is an alternative for NCA in R.
Have you heard about RapidNCA? Seems that only the bloody linear trapezoidal method is implemented.
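For reference, the linear trapezoidal rule just sums trapezoid areas between successive sampling points; a minimal base-R sketch (illustrative only, neither RapidNCA's nor bear's code, with made-up example values):

  auc_lin <- function(t, conc) sum(diff(t) * (head(conc, -1) + tail(conc, -1)) / 2)
  auc_lin(t = c(0, 0.5, 1, 2, 4, 8), conc = c(0, 1.2, 2.0, 1.6, 0.9, 0.3))  # AUC 0-tlast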

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
nobody
nothing
2015-07-21 09:19
@ Helmut
Posting: # 15114
Views: 13,597
» Or is there a hidden counter in the software which phones home every time I execute a workflow?

..and then there was the story of this S-plus CD I could not install some years later as the antivirus/firewall on my Microtrash machine complained about some malware..
Just saying..

mittyri
★★
Russia,
2015-07-21 09:46
@ Helmut
Posting: # 15115
Views: 13,590
Hi Helmut!
» Many companies use PHX/WNL for NCA only (and run the stats in SAS).

I heard that some CRO's do that, could you explain it? Don't they trust the BEQ module in Phoenix/WinNonlin?
As you wrote here, there are some discrepancies between WNL partial tests and SAS Type III, but is it so important thing for final results? I don't think so..
Why are they paying the double price (SAS+WNL)?

Helmut
★★★
Vienna, Austria,
2015-07-21 12:29
@ mittyri
Posting: # 15117
Views: 13,767
Hi Mittyri,
» » Many companies use PHX/WNL for NCA only (and run the stats in SAS).
»
» I heard that some CRO's do that, could you explain it?
  1. I see it in many publications and the majority (!) of reports moving around my desk.
    When I search the forum with the keyword “SAS” I get 1,000+ hits. SAS+bioequivalence gives 49,000 Google-hits.
  2. I will try at the end (educated guesses / crystal ball gazing / tassology).

» Don't they trust the BEQ module in Phoenix/WinNonlin?

Duno. Given the posts in the forum asking for solutions in SAS maybe those members could answer this question?
» As you wrote here, there are some discrepancies between WNL partial tests and SAS Type III, but is it so important thing for final results? I don't think so..

Correct; doesn’t matter at all. The residual variance / means – and therefore, the PE and CI – are independent from this stuff.
» Why are they paying the double price (SAS+WNL)?

I guess it is a combination of tradition, using what they have already, etc. SAS is part of the biostatistical curriculum (though R is catching up). I know some (young) statisticians who are familiar with SAS but had no idea about NCA when they started their career in the industry. Instead of programming macros themselves and/or trusting what's on the net (1, 2) they tell their boss “It will take weeks to code/validate that. Let's pay some bucks for a Kodak (3)…” Sooner or later one will not only need NCA, but more sophisticated features like Nonparametric Superposition, basic modeling in order to optimize sampling schedules, etc. I would not start that from scratch in SAS.
On the other hand agencies published SAS-code for some BE-methods (FDA: RSABE for HVDs and NTIDs; EMA: ABEL). It is certainly easier to copy-paste code compared to setting up / validating it in PHX/WNL (took me some weeks). Up to now nobody succeeded to code this stuff in R.
  1. Matos-Pita AS, de Miguel Lillo B. Noncompartmental Pharmacokinetics and Bioequivalence Analysis. PharmaSUG (Pharmaceutical Industry SAS® Users Group), May 22–25, 2005, Phoenix, AZ, USA.
    “The performance and validity of the program was tested against WinNonlin®, one of the most commonly used programs for pharmacokinetic analysis in the Pharmaceutical Industry. The results of twenty bioequivalence clinical trials were evaluated using both WinNonlin and SAS. PROC COMPARE of SAS was used to test for differences. There was a 100% agreement in all 20 studies.”
  2. He J. SAS Programming to Calculate AUC in Pharmacokinetic Studies—Comparison of Four Methods in Concentration Data. PharmaSUG, June 1-4, 2008, Atlanta, GA, USA.

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
d_labes
★★★
Berlin, Germany,
2015-07-21 13:04
@ Helmut
Posting: # 15119
Views: 13,536
Dear Helmut,
» On the other hand agencies published SAS-code for some BE-methods (FDA: RSABE for HVDs and NTIDs; EMA: ABEL). It is certainly easier to copy-paste code compared to setting up / validating it in PHX/WNL (took me some weeks). Up to now nobody succeeded to code this stuff in R.

If you mean the statistics of such methods: Which one do you like first? I could imagine some ambitious R-coders out there which could be able to do the job.
Except of course implementing the SAS Proc mixed code for replicate designs .
BTW: The reason for using SAS for the stats is very, very easy: 'SAS is validated.'
Helmut
★★★
Vienna, Austria,
2015-07-21 13:24
@ d_labes
Posting: # 15121
Views: 13,552
Dear Detlew,
» » Up to now nobody succeeded to code this stuff in R.
»
» I could imagine some ambitious R-coders out there which could be able to do the job.

Sure. Not complicated at all.
» Except of course implementing the SAS Proc mixed code for replicate designs .

This is exactly what I had in mind.
» BTW: The reason for using SAS for the stats is very, very easy: 'SAS is validated.'

True, of course.

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes

luvblooms
★★
India,
2015-07-21 09:48
(edited by luvblooms on 2015-07-21 11:31)
@ Helmut
Posting: # 15116
Views: 13,654
Hi HS,
» » […] And also, don't know how they'll account for the studies conducted, …
» Oh yes, I performed 100, but state 5. Will Certara hire a private dick sneaking into my office and check? C’mon! Or is there a hidden counter in the software which phones home every time I execute a workflow?

Yes. A couple of days back, I had a word with a 'scientist' who is with Certara and he informed me about the proposed fee model (the one that we got in mail) and also that there will be a counter/log sort of thing in the tool which will record the total number of studies run.
» Yep. I regularly recalculate stuff posted here (~10/year). Apart from Pharsight’s employees I’m the top-poster in Certara’s Phoenix-Forum. Similar over there. Shall I pay for sumfink I do in my spare time at no costs? Gimme a break!

Time to increase your consultancy fee accordingly
» Why not? Many companies use PHX/WNL for NCA only (and run the stats in SAS). Package bear is an alternative for NCA in R.

There is one more tool (actually it is a set of software under the name Rx Express) called PK express. I have checked it a couple of times when it was in the beta phase and the flow looks quite easy. There were some flaws in it initially, but the company is working hard to improve it. Also heard that FDA had evaluated their software and said OK with some minor suggestions.
AFAIK, their proposed fee structure is very lucrative.
Wonder if anyone else on the forum evaluated that. If yes do share the thoughts on the quality.
» Have you heard about RapidNCA? Seems that only the bloody linear trapezoidal method is implemented.

Hearing this for first time. Let me check it out.
Helmut
★★★
Vienna, Austria,
2015-07-21 13:03
@ luvblooms
Posting: # 15118
Views: 13,604
Hi Luv,
» » Or is there a hidden counter in the software which phones home every time I execute a workflow?
»
» Yes. A couple of days back, I had a word with a 'scientist' who is with Certara and he informed me about the proposed fee model (the one that we got in mail) and also that there will be a counter/log sort of thing in the tool which will record the total number of studies run.

That’s bizarre. I regularly re-evaluate already completed projects (exploring new PK metrics, impact of other sampling schedules, Nonparametric Superposition, …) in order to design new studies. Will that increase the counter? Oh dear!
Furthermore, how will Certara access the counter? I have both a hardware firewall on my router and a software firewall on my machine. Trojan, backdoor, rootkit in the “Pharsight Licensing Wizard”? Anybody out there already installed the new license? What is stated in the EULA we all click away that fast?
» » Shall I pay for sumfink I do in my spare time at no costs? Gimme a break!
»
» Time to increase your consultancy fee accordingly

I try hard. “Clever” clients try to issue contracts where my fee is fixed for some years…
» There is one more tool […] PK express.

Looks promising. I just requested a trial-version. At the bottom of the webpage I found

Validation Kit

Let’s see what that means (e.g., surviving our reference data sets)…
» AFAIK, their proposed fee structure is very lucrative.

Couldn’t find anything on the website. Can you give me an idea?
» » Have you heard about RapidNCA?
»
» Hearing this for first time. Let me check it out.

Brand-new. Version 1 was released on March 1, 2015.
PS: Seems to be a “popular” topic. ~15 views / hour so far.

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
Lucas

Brazil,
2015-07-21 13:31
@ Helmut
Posting: # 15122
Views: 13,599
Hi Helmut!
» I guess small CROs – which will not jump ship – will opt for the lowest fee (still an amazing cost increase of 75% from the 2014 fee of 2,254 USD). Then what?

[…] you will be asked to submit a certification document to reconcile the # of actual studies completed to those you purchased.

A “certification document”? Oh yes, I performed 100, but state 5. Will Certara hire a private dick sneaking into my office and check? C’mon! Or is there a hidden counter in the software which phones home every time I execute a workflow?

I asked about this, they said that the studies we conduct are confidential and they'll not be able to control that. So they would not know how many studies we did in fact. This log that Luvblooms was talking about seems bizarre, since the number of times I use Phoenix is not the same as the number of studies I conduct, as established above… And we haven't even talked about the studies with two or three analytes, where sometimes we have to run separate workflows (different concentration units, sampling schedule, etc.).
» » May be time to improve my R 'almost non-existing' skills!
»
» Why not? Many companies use PHX/WNL for NCA only (and run the stats in SAS). Package bear is an alternative for NCA in R.

Time goes by in the rush of our daily routines, and unfortunately little time is left for developing new skills.. But I gotta find some room for that in my day, maybe I could squeeze that in 00-06am.
I use R now and then, but still a bit painful to use it comparing with Phoenix.
» Have you heard about RapidNCA? Seems that only the bloody linear trapezoidal method is implemented.

Well, this is the first time I'm hearing about this also.. Really seems nice, have you checked it out personally? I couldn't find a demo version to download.
Regards
Lucas
Helmut
★★★
Vienna, Austria,
2015-07-21 13:40
@ Lucas
Posting: # 15123
Views: 13,459
Hi Lucas,
» This log that Luvblooms was talking about seems bizarre, since the number of times I use Phoenix is not the same as the number of studies I conduct, as established above… And we haven't even talked about the studies with two or three analytes, where sometimes we have to run separate workflows (different concentration units, sampling schedule, etc.).

X-actly. Crazy.
» Time goes by in the rush of our daily routines, and unfortunately little time is left for developing new skills.. But I gotta find some room for that in my day, maybe I could squeeze that in 00-06am.

Sleep deprivation might give you an interesting state of mind…
» » Have you heard about RapidNCA?
»
» Well, this is the first time I'm hearing about this also..

Only useful if one has another package to perform the stats.
» Really seems nice, have you checked it out personally? I couldn't find a demo version to download.

1. No and 2. Me not either.

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes

jag009
★★★
NJ,
2015-07-22 16:16
@ Helmut
Posting: # 15126
Views: 13,407

Certara (Pharsight): Phoenix/WinNonlin licensing policy

Hi Helmut!
» Certara (the company behind Pharsight) changed their pricing policy for CROs (only!). Additionally to the annual single-user license fee (2,444 USD) CROs will be charged a fee based on the number of studies/projects performed per year… Part of the business-lingo I received:..

That's a big load of BS. How much money do they make from this software anyway??? Maybe later on they will turn the program into a web-based program, who knows!
Not to mention the software still runs like a pig. On my 8 GB i5 it's not running any faster than on my old P4 with 4 GB RAM. Opening and closing a project takes forever (IVIVC especially).
The only reason I use it is because I am too lazy to write SAS codes for determining half-life. Heck they don't even have an option to run partial+full replicate BE analysis yet (I mean within the program, not using your worksheet).
John

Helmut
★★★
Vienna, Austria,
2015-07-22 17:51
@ jag009
Posting: # 15129
Views: 13,455
Hi John,
» The only reason I use it is because I am too lazy to write SAS codes for determining half-life.

See Ref #1 in the post above. Of course, there is no option to select data points during visual inspection of fits. The 100% agreement with WinNonlin the authors proudly reported clearly refers only to the “automatic” method based on the maximum R²adj.
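For readers who have not seen it, a rough R sketch of that “automatic” approach (not WinNonlin's actual code): fit log(C) vs. t over the last n ≥ 3 points for every n, keep the fit with the largest adjusted R², and take t½ = ln 2/λz; WinNonlin's refinements (excluding Cmax, the tolerance rule favouring more points) are ignored here, and the example values are hypothetical.

  half_life_auto <- function(t, conc) {
    ok <- conc > 0                        # log-linear fit needs positive concentrations
    t <- t[ok]; conc <- conc[ok]
    stopifnot(length(t) >= 3)
    best <- NULL
    for (n in 3:length(t)) {              # candidate sets: the last n sampling points
      idx <- (length(t) - n + 1):length(t)
      fit <- lm(log(conc[idx]) ~ t[idx])
      r2adj <- summary(fit)$adj.r.squared
      if (is.null(best) || r2adj > best$r2adj)
        best <- list(n = n, r2adj = r2adj, lambda_z = -unname(coef(fit)[2]))
    }
    c(n_points = best$n, lambda_z = best$lambda_z, t_half = log(2) / best$lambda_z)
  }
  half_life_auto(t = c(1, 2, 4, 8, 12, 24),                  # hypothetical sampling times (h)
                 conc = c(2.0, 1.6, 1.0, 0.5, 0.27, 0.07))   # hypothetical concentrations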
» Heck they don't even have an option to run partial+full replicate BE analysis yet (I mean within the program, not using your worksheet).

Yep. If I hadn't told them, I guess they wouldn't know until today that RSABE exists and what it means. Took me ~10 years to convince them that the reported power is for the dead and buried 80/20 rule. Since v6.4 we have – additionally – power for the TOST. Wow!
BTW, if using SAS one should know what to do as well. Today I got a response from the large Indian CRO (back-story: useless post-hoc power 29% but reported by SAS with 100%). The answer of the CRO’s “statistician” was:

Further, we have checked the analysis part pertaining to power calculation once again and also verified with the results from WinNonlin software Version 5.3. And, it proves that the results provided earlier are ok.

2×2=5… How I love the word “verify”. Rubbish in, rubbish out.
Convincing the “Far Side” to implement a workaround (!) for unequal variances in parallel designs was even tougher. Took me merely 13 years.

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes