Discover our “Incredible” FSP Data Sciences Services at PharmaSUG 2024

“Congratulations on the PMDA approval, that is a huge accomplishment. The submission would never have been possible without your hard work and dedication. Thank you.”
Senior Director, Biostatistics Programming, Global Biopharmaceutical Company, speaking about our Global FSP solutions

Expertise, hard work, dedication, and high quality have never been more important as clinical trials adapt to macro challenges—finding patient populations, rising costs, and scaling new capabilities. The first step toward efficiency is choosing the right partner. Navitas Life Sciences is here to guide you through the intricate landscape of robust Functional Service Provider (FSP) services that can enhance the efficiency and outcomes of clinical research. These include statistical programming in SAS and R, data management, and biostatistics services. As a leading FSP in data services, we specialize in providing comprehensive biostatistics support in clinical trials.

With PharmaSUG 2024 just around the corner, we're excited to showcase how our expertise can elevate your clinical trials to new heights. In this blog, we met with our expert team—Kathy Greer, Executive Vice President, Professional Services; Kalyan Gopalakrishnan, Executive VP; Kevin Viel, Lead SPA I; and Timothy Harrington, Senior SPA I—to learn more about where our global FSP can take your clinical trial and why you should catch the team at PharmaSUG 2024.

Kathy Greer

Executive Vice President Professional Services

What trends do you foresee in Data Science services within the life sciences industry, and how is Navitas positioned to address them?

The industry is moving towards open source. We have staff who can program in both SAS and R, which is what clients are looking for today. R training is available to all staff during any downtime.

As an executive with vast experience, what strategies do you believe are essential for effective management of clinical FSP?

Dedicated staff for each client ensures focused attention and accountability for our biostatistical and programming services. We assign a senior manager to oversee each client's projects, facilitating clear communication and timely issue resolution. Regular team meetings drive collaboration and alignment on project goals. Most importantly, executive management involvement with all clients demonstrates commitment to their success. We implement governance processes for each client to ensure consistency, quality, and adherence to standards.


Kalyan Gopalakrishnan

Executive VP

What are the advantages of collaborating as a data sciences FSP, and how do you stay updated on advancing technologies and changing regulatory requirements?

The main advantage is access to specialized expertise on an as-needed basis. We achieve this by participating in and contributing to industry forums, partnering with innovative technology companies, and collaborating with our clients to develop a strategic roadmap.

Can you share insights into Navitas Life Sciences' approach to optimizing management of clinical FSP, especially in terms of efficiency and quality?

In our mission, quality is of utmost importance, and there are no optimization parameters. Our focus is always on constant improvement. Navitas Life Sciences strives to efficiently utilize our resources to support our clients' needs in terms of time zone coverage, project execution leadership, relevant therapeutic area experience, data standards, biostatistical services expertise, and yet remain cost-effective.


Kevin Viel

Lead SPA I

What are some key challenges organizations face when utilizing SAS procedures for data visualization, and how does your approach address them?

Without a doubt, the most challenging tasks that we clinical trial programmers face involve creating figures. The admirable evolution of the SAS® System has refined the appearance of plots, increasing the amount of information readily gleaned from them by leaps and bounds; the mantra that a SAS program does a mediocre job of producing one graph but excels at producing tens of them is quite dated. SAS might even produce a publication-quality figure. The options available to programmers to produce elegant figures, with ample textual choices from data labels to tables within the figures, convey important data concisely, but that richness comes at the cost of a steep learning curve or, in many of our cases, a steep re-learning curve. You may mildly regret not commenting your programs voluminously two months earlier, when you reached those discovery plateaus, now that you must adapt that work to the next delivery (we all do, from time to time, especially since the next assignment will be urgent precisely because you already mastered it in a previous delivery). An experienced programmer keeps intermediate iterations in a library to access in times of inspiration…or exasperation.

While we contemplate which plots to use and how to overlay them or array them in grids, we do not want the selection of attributes, such as line style or color, to burden us. In fact, this aspect of programming can be the crowning feature of the figures, making or breaking our ability to infer from them, dare we say to produce aesthetically pleasing images?

Many organizations, especially given time and resource constraints, may overlook “manual” control of attributes and instead be satisfied that a combination of attributes generated by the procedures, together with a good legend, can resolve any temporary confusion. Beyond seeing the various attributes in the documentation, few organizations explore their appearance (readability) in the context of their output and in distinct combinations (line style, line color, and marker symbol, for instance). They may rely on cycling and on controlling the collation order to achieve “assignment”. However, assigning a given subject or arm the same attributes from figure to figure may be highly desirable.

The macro that I will present demonstrates a project- or compound-level Discrete Attribute Map (DATTRMAP) data set that invariantly assigns a combination of attributes to a characteristic, such as a subject, an arm, or an age group. It also allows the user to explore the actual appearance of the attributes and their various combinations in their usual context, such as a printed page or screens of various sizes and resolutions. Such an option will help organizations select standard attributes that one can distinguish by simply viewing the figure, that is, without zooming, for instance. The organizations can define the priority of cycling, too, so that new subjects or arms are assigned available combinations of attributes in a known, orderly manner without redundancy. Allowing the programmer to concentrate on the other aspects of programming figures will increase efficiency and standardization while producing figures rich with information.
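For readers unfamiliar with discrete attribute maps, here is a minimal sketch of the underlying SAS feature (not the macro itself); the data set names ATTRMAP and ADLB and the group variable TRTP are illustrative assumptions.

```sas
/* Sketch only: ATTRMAP, ADLB, and TRTP are assumed names.          */
/* A DATTRMAP data set fixes each arm's attributes explicitly.      */
data attrmap;
  length id $8 value linecolor markercolor $16
         linepattern markersymbol $12;
  id = "trt";
  value = "Placebo";   linecolor = "gray"; markercolor = "gray";
  linepattern = "2";   markersymbol = "circle";   output;
  value = "High Dose"; linecolor = "red";  markercolor = "red";
  linepattern = "1";   markersymbol = "square";   output;
run;

/* Every figure that references ATTRID=trt renders each arm with   */
/* the same attributes, regardless of collation order or cycling.  */
proc sgplot data=adlb dattrmap=attrmap;
  series x=avisitn y=aval / group=trtp attrid=trt;
run;
```

The point of the ID/ATTRID pairing is that the attribute assignment travels with the data value, not with the order in which groups happen to appear in a given figure.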

Could you elaborate on the significance of standardizing validation data sets (VALDS) as matrices indexed by Page, Section, Row, and Columns (PSRC), and how this approach enhances validation and output creation?

Your manager, at 3 o’clock on the Thursday before your week of vacation that was approved months ago, approaches your cubicle to inform you that Biostatistics requests a “few minor updates” to the tables produced by another team member who is on sick leave. The minor changes include adding a new column combining Arms 1 and 2, adding a new section with the frequency of age groups after the summary (means) of Age, and rearranging the order of the rows in the frequency-of-genotypes section while including genotypes that are not found with a “0”. You recall that in a delivery two years ago Biostatistics made a similar request to add a column combining groups in a table on which you were the Validation programmer, and the Main programmer managed that request by adding the column as the variable C7, the next available column, instead of making it C3. That could mean several IF-THEN-ELSE statements and another call to a FREQ macro with the VAR macro parameter set to GENOTYPE. You hope. How hard does despair hit you?

Actually, you will have this done by coffee hour on Friday morning, so that the Validation programmer can promote it to Prod and on to review by Friday afternoon, if you employ standardization that calls no more than one MEANS procedure and one FREQ procedure per table program and you can “pigeonhole” the values into their respective cells of the updated table “matrix” using the VALDS and macros that I will present. Oh, you have to update the “dimensions” data set, too, but that is a minor and concise task that sometime in the near future will be handled by programmatically reading the shell (or accessing the data that creates it).

The VALDS (VALidation Data Set) evolved to assist the Validation of TFL (Table, Figure, and Listing) programs, though it mostly pertains to TL programs. It standardizes the variables and the structure (sort order, for instance) of the data sets produced by the Main program to generate the output, which the Validation programmer must replicate with a 100% match (variables and observations). No longer does the Validation programmer need to ponder whether the variables might be named A, B, C… or C1, C2, C3, et cetera, or in WHAT ORDER the Main programmer sorted. Since the SAS® System does not allow a mix of character and numeric values in a variable, the only numeric values are the “sort order” variables. The values derived by the programmer are character variables, which are what is displayed in the output. The shell (VALDS) is considered a matrix, with dimensions of Page, Section, Row, and Column (PSRC). A hash (or other method) links a label, such as PARAM = “Cholesterol (mg/dL)”, to a numeric value, such as PAGE_ORDER_1 = 3, to index the value into its respective cell in the VALDS. If one opens a well-formed VALDS in the SAS Explorer, it should appear “like” the output, aiding the manual (spot) checking of the output against the VALDS.
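As a hedged illustration of the hash linkage just described, the sketch below maps a label to its order index; the data set names ORD and RESULTS are hypothetical, not part of the approach to be presented.

```sas
/* Hypothetical lookup table mapping a label to its PSRC index */
data ord;
  length param $40;
  param = "Age (years)";         page_order_1 = 1; output;
  param = "Cholesterol (mg/dL)"; page_order_1 = 3; output;
run;

/* Attach the index to each derived value via a hash object    */
data valds;
  if 0 then set ord;                 /* host variables for hash */
  if _n_ = 1 then do;
    declare hash h(dataset: "ord");
    h.defineKey("param");
    h.defineData("page_order_1");
    h.defineDone();
  end;
  set results;                       /* long-format derived values */
  if h.find() = 0;                   /* keep only mapped rows      */
run;
```

Each derived value thus lands in its PSRC cell by lookup rather than by the accidental order of the program's procedure calls.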

Changes to the order of pages, sections, rows, or columns entail changing the index associated with the label, but no other change in the program code. This includes additions and eliminations. By “turning” the data set, only one MEANS and one FREQ procedure are required, rather than a call to a macro for each variable (typically, a Section). The caveat is that one must include the variable in the list of interest with an appropriate label; for instance, the variable RACE might have the Section label “Race, n(%)”, a comparatively trivial task. Cheekily, one might say that this approach adheres strictly to the “analysis ready” decree of being “one procedure away”. With a judicious use of UPDATE statements, one can rather flexibly populate default values, such as “0” for the N and “” (missing) for other statistics of a summary section. This option is useful when one must include rows from a shell for values (levels) that do not occur in the data.
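A sketch of the UPDATE idiom mentioned above, assuming hypothetical data sets SHELL (every PSRC cell, seeded with defaults) and COMPUTED (only the derived values), both sorted by the order variables:

```sas
/* SHELL carries every cell of the table matrix with defaults   */
/* ("0" for N, missing elsewhere); COMPUTED overlays only the   */
/* cells that actually occur in the data. Nonmissing values in  */
/* the transaction data set replace the master's defaults.      */
data valds;
  update shell computed;
  by page_order section_order row_order;
run;
```

Because UPDATE leaves master values untouched where the transaction is missing, shell rows for levels absent from the data survive with their defaults, which is exactly the behavior a table shell requires.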

Standardization improves accuracy and efficiency. In a way, the approach I will present epitomizes programs writing programs. Just as CDISC has standardized the SDTM and ADaM data sets, this approach standardizes the VALDS and TL programs, allowing automation of the production of tables and listings (and, indeed, of their shells). The approach that I will present shortens the “startup runway” of adding new resources to a project that is approaching a deadline or that has lost essential team members; much like programmers know where to find AEDECOD, they will know how to create a VALDS and use programs that help automate the creation of output, without studying the SOPs/WIs and without learning the conventions of the teams.


Timothy Harrington

Senior SPA I

What are some practical applications of SAS Transpose Procedure in clinical programming, and how can it streamline data manipulation tasks?

Basically, PROC TRANSPOSE rearranges a column in a source data set as a series of columns in a destination data set, with column names and labels assigned from the ID values in the corresponding rows of the source data set. This compacts the data into a more usable format with fewer observations. An example is rearranging observations of lab tests into columns identified by each lab test. The same process can be performed with a DATA step and RETAINed variables that capture each data value and are then output at the LAST. value of the innermost BY variable, but that involves quite a lot of code and processing, whereas PROC TRANSPOSE performs it all in one procedure. This is particularly useful for handling very large volumes of data such as laboratory results, AE data, or vital signs data. The transposed data set, being smaller and more compact, is easier to process.
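A minimal sketch of the lab example described above; the data set LB and the variables USUBJID, LBTESTCD, and LBSTRESN follow SDTM naming conventions but are assumptions for illustration.

```sas
/* Sketch only: LB, USUBJID, LBTESTCD, LBSTRESN are assumed names */
proc sort data=lb out=lb_sorted;
  by usubjid;
run;

/* One row per subject; each value of LBTESTCD becomes a column  */
proc transpose data=lb_sorted out=lb_wide(drop=_name_);
  by usubjid;
  id lbtestcd;        /* column names taken from the ID variable */
  var lbstresn;       /* numeric results populate the cells      */
run;
```

The equivalent DATA step would need RETAIN statements, explicit variable declarations for every test code, and FIRST./LAST. logic, all of which the procedure handles implicitly.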

As a session presenter, what topics will you be discussing, and what insights can you provide into current trends and emerging technologies in clinical reporting?

R is increasingly being used for clinical reporting (TLGs). Personally, I am very impressed by the graphics capability of R, and I wonder whether R will, over time, displace more of SAS.

There is also the role of AI. I believe AI will become a very useful tool for identifying patterns and trends, such as efficacy signals and statistical correlations.

Supporting Efficient Clinical Biostatistics Services

Biostatistics in clinical research lies at the heart of all trials, ensuring that data is accurately collected, analysed, and interpreted. At Navitas Life Sciences, we offer programming, data management, and biostatistics FSP Data services tailored to meet your unique biostatistics clinical trial needs.

“You guys rock. I can't believe all Clinical Study Report (CSR) Tables, Listings, and Figures (TLFs) are ready now for review right after the Database Lock (DBL). An excellent example of teamwork and the importance of preparation effort prior to the DBL. We appreciate your outstanding work and dedication. Let us keep rolling this way.”
Senior Director of Biostatistics, Global Biopharmaceutical Company

Data Services are indispensable for all investigational drug or device development programs, playing a crucial role in determining the efficacy and effectiveness of clinical trials. This encompasses aspects such as study design, conduct, optimal data collection points, and the methods for analysis and reporting.

At Navitas Life Sciences, our clinical trial statisticians are dedicated to delivering quality results and implementing the most suitable approach for each client's situation. We prioritize accuracy in analysis while avoiding unnecessary complexity. Through the use of appropriate tools and methodologies, we demonstrate the efficacy of a drug and present results in a clear and comprehensible manner.


Statistical Programming, Data Management and Biostatistics FSP Services

Our dedicated team of Statistical Programmers, Data Management experts, and Biostatisticians serves as an extension of your in-house team, providing flexible and scalable solutions to meet your project requirements efficiently.

Statistical Programming, Data Management and Biostatistics Consulting Services

Need expert guidance on study design, statistical methodologies, or regulatory submissions? Our Statistical Programming, Data Management, and Biostatistics consulting services provide valuable insights to navigate the complexities of clinical research.

Clinical Biostatistics Services, Statistical Programming Support, and Data Management Packages

From protocol development to final analysis, our FSP data services cover every stage of your trial, ensuring rigorous data analysis and interpretation.

Meet Us at PharmaSUG 2024

Date: 19 – 22 May 2024

Venue: Baltimore, MD

We are delighted to be both attending and presenting at the PharmaSUG 2024 event taking place in Baltimore. Join our team as they take to the podium to present across three of the conference streams: Advanced Programming, Data Visualization and Reporting, and Solution Development. In addition, our Director of Clinical Reporting, Sid Kumar, is on the academic committee representing the Metadata Management stream.

You will find us at the podium presenting the following:

  • AP-138 | An introduction to the SAS Transpose Procedure and its options – presented by Timothy Harrington, Senior SPA I
  • DV-438 | Exploring DATALINEPATTERNS, DATACONTRASTCOLORS, DATASYMBOLS, the SAS System® REGISTRY procedure, and Data Attribute Maps (ATTRMAP) to assign invariant attributes to subjects and arms throughout a project – presented by Kevin Viel, Lead SPA I
  • SD-356 | Standardizing Validation Data Sets (VALDS) as matrices indexed by Page, Section, Row, and Columns (PSRC) to improve Validation and output creation and revisions – presented by Kevin Viel, Lead SPA I

If you would like to arrange to meet with our team and learn how we can help you meet the future challenges of Data Requirements in your clinical trials, please click here.

Navitas Life Sciences can assist with your specific statistical data analysis requirements, regardless of their complexity or scope. We can identify bottlenecks and ensure accurate data processing. Whether you need comprehensive FSP support, expert consulting, or specialized statistical programming, data management, and biostatistics analysis, we have the solutions to drive your trial's success. Connect with us at PharmaSUG 2024 to learn more about how we can elevate your clinical trial outcomes.

To learn more about our services and solutions, reach out to our team.
